Test Report: QEMU_macOS 19409

edd4f56319c0ca210375a4ae17d17ce22fec0e34:2024-08-12:35748

Failed tests (156/266)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 15.04
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 10.13
36 TestAddons/Setup 10.12
37 TestCertOptions 10.02
38 TestCertExpiration 195.18
39 TestDockerFlags 10.11
40 TestForceSystemdFlag 10.16
41 TestForceSystemdEnv 10.07
47 TestErrorSpam/setup 9.81
56 TestFunctional/serial/StartWithProxy 9.97
58 TestFunctional/serial/SoftStart 5.25
59 TestFunctional/serial/KubeContext 0.06
60 TestFunctional/serial/KubectlGetPods 0.05
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
68 TestFunctional/serial/CacheCmd/cache/cache_reload 0.15
70 TestFunctional/serial/MinikubeKubectlCmd 0.73
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.97
72 TestFunctional/serial/ExtraConfig 5.26
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 0.08
75 TestFunctional/serial/LogsFileCmd 0.07
76 TestFunctional/serial/InvalidService 0.03
79 TestFunctional/parallel/DashboardCmd 0.2
82 TestFunctional/parallel/StatusCmd 0.11
86 TestFunctional/parallel/ServiceCmdConnect 0.14
88 TestFunctional/parallel/PersistentVolumeClaim 0.03
90 TestFunctional/parallel/SSHCmd 0.12
91 TestFunctional/parallel/CpCmd 0.26
93 TestFunctional/parallel/FileSync 0.07
94 TestFunctional/parallel/CertSync 0.27
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
104 TestFunctional/parallel/Version/components 0.04
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.03
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.03
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.03
109 TestFunctional/parallel/ImageCommands/ImageBuild 0.11
111 TestFunctional/parallel/DockerEnv/bash 0.04
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
115 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
116 TestFunctional/parallel/ServiceCmd/List 0.04
117 TestFunctional/parallel/ServiceCmd/JSONOutput 0.05
118 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
119 TestFunctional/parallel/ServiceCmd/Format 0.04
120 TestFunctional/parallel/ServiceCmd/URL 0.04
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.07
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 109.34
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.3
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.28
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.13
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.03
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 36.34
150 TestMultiControlPlane/serial/StartCluster 9.9
151 TestMultiControlPlane/serial/DeployApp 117.94
152 TestMultiControlPlane/serial/PingHostFromPods 0.08
153 TestMultiControlPlane/serial/AddWorkerNode 0.07
154 TestMultiControlPlane/serial/NodeLabels 0.06
155 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.07
156 TestMultiControlPlane/serial/CopyFile 0.06
157 TestMultiControlPlane/serial/StopSecondaryNode 0.11
158 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.08
159 TestMultiControlPlane/serial/RestartSecondaryNode 59.6
160 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.07
161 TestMultiControlPlane/serial/RestartClusterKeepsNodes 8.39
162 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
163 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
164 TestMultiControlPlane/serial/StopCluster 3.3
165 TestMultiControlPlane/serial/RestartCluster 5.25
166 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
167 TestMultiControlPlane/serial/AddSecondaryNode 0.07
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.08
171 TestImageBuild/serial/Setup 9.84
174 TestJSONOutput/start/Command 9.79
180 TestJSONOutput/pause/Command 0.08
186 TestJSONOutput/unpause/Command 0.04
203 TestMinikubeProfile 10.35
206 TestMountStart/serial/StartWithMountFirst 10.01
209 TestMultiNode/serial/FreshStart2Nodes 9.93
210 TestMultiNode/serial/DeployApp2Nodes 115.66
211 TestMultiNode/serial/PingHostFrom2Pods 0.09
212 TestMultiNode/serial/AddNode 0.07
213 TestMultiNode/serial/MultiNodeLabels 0.06
214 TestMultiNode/serial/ProfileList 0.07
215 TestMultiNode/serial/CopyFile 0.06
216 TestMultiNode/serial/StopNode 0.13
217 TestMultiNode/serial/StartAfterStop 58.4
218 TestMultiNode/serial/RestartKeepsNodes 8.67
219 TestMultiNode/serial/DeleteNode 0.1
220 TestMultiNode/serial/StopMultiNode 3.32
221 TestMultiNode/serial/RestartMultiNode 5.25
222 TestMultiNode/serial/ValidateNameConflict 20.85
226 TestPreload 10.21
228 TestScheduledStopUnix 9.93
229 TestSkaffold 12.4
232 TestRunningBinaryUpgrade 588
234 TestKubernetesUpgrade 19.06
247 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.42
248 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.22
250 TestStoppedBinaryUpgrade/Upgrade 575.28
252 TestPause/serial/Start 10.04
262 TestNoKubernetes/serial/StartWithK8s 9.8
263 TestNoKubernetes/serial/StartWithStopK8s 5.3
264 TestNoKubernetes/serial/Start 5.26
268 TestNoKubernetes/serial/StartNoArgs 5.31
270 TestNetworkPlugins/group/auto/Start 9.83
271 TestNetworkPlugins/group/kindnet/Start 9.74
272 TestNetworkPlugins/group/calico/Start 9.88
273 TestNetworkPlugins/group/custom-flannel/Start 9.78
274 TestNetworkPlugins/group/false/Start 9.86
275 TestNetworkPlugins/group/enable-default-cni/Start 9.84
276 TestNetworkPlugins/group/flannel/Start 9.83
277 TestNetworkPlugins/group/bridge/Start 9.81
278 TestNetworkPlugins/group/kubenet/Start 9.74
280 TestStartStop/group/old-k8s-version/serial/FirstStart 9.76
282 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
283 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
286 TestStartStop/group/old-k8s-version/serial/SecondStart 5.25
287 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
288 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
289 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
290 TestStartStop/group/old-k8s-version/serial/Pause 0.1
292 TestStartStop/group/no-preload/serial/FirstStart 9.99
293 TestStartStop/group/no-preload/serial/DeployApp 0.09
294 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
297 TestStartStop/group/no-preload/serial/SecondStart 5.24
298 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
299 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
300 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
301 TestStartStop/group/no-preload/serial/Pause 0.1
303 TestStartStop/group/embed-certs/serial/FirstStart 10.17
305 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.98
306 TestStartStop/group/embed-certs/serial/DeployApp 0.09
307 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
310 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
311 TestStartStop/group/embed-certs/serial/SecondStart 5.28
312 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.16
315 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.26
316 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
317 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
318 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
319 TestStartStop/group/embed-certs/serial/Pause 0.09
321 TestStartStop/group/newest-cni/serial/FirstStart 9.89
322 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
323 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
324 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
325 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
330 TestStartStop/group/newest-cni/serial/SecondStart 5.25
333 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
334 TestStartStop/group/newest-cni/serial/Pause 0.1

TestDownloadOnly/v1.20.0/json-events (15.04s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-858000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-858000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (15.033771333s)

-- stdout --
	{"specversion":"1.0","id":"cf87f0ee-4c10-41b0-8a49-cef9e4c347b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-858000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"10202bbf-ab6d-4595-9f95-d4d18a5e653e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19409"}}
	{"specversion":"1.0","id":"2dd0d9dd-f709-4b9e-926f-6477cdf0f34c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig"}}
	{"specversion":"1.0","id":"49dd9da5-620f-4fb4-9102-bc3503ccd596","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"f0b89660-9c58-4381-91f8-c4982674f7f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"30bd5511-b1d9-4eb4-95d7-010612eb0b1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube"}}
	{"specversion":"1.0","id":"52eff5cf-f3da-4242-87dd-1730c1ece8cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"1bbf746f-b8ce-421a-8630-25962fa71cd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"04f713be-e5be-4312-88c1-c69ed26c98f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"175f0214-4092-4a5e-9dac-1a5b47ae23d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3634ddd6-ba92-4f48-acf2-740a2805d5a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-858000\" primary control-plane node in \"download-only-858000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"93291b9e-b49c-4d42-8762-275c73da71b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"67fc7427-7162-41d5-b956-5a7b3c56421f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19409-6342/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108c4dd40 0x108c4dd40 0x108c4dd40 0x108c4dd40 0x108c4dd40 0x108c4dd40 0x108c4dd40] Decompressors:map[bz2:0x1400000e250 gz:0x1400000e258 tar:0x1400000e1c0 tar.bz2:0x1400000e200 tar.gz:0x1400000e210 tar.xz:0x1400000e220 tar.zst:0x1400000e230 tbz2:0x1400000e200 tgz:0x14
00000e210 txz:0x1400000e220 tzst:0x1400000e230 xz:0x1400000e260 zip:0x1400000e270 zst:0x1400000e268] Getters:map[file:0x140014205c0 http:0x1400086e500 https:0x1400086e550] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"e4e1da7f-a6a3-40fc-a1fb-7f5f3a8805b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0812 03:19:15.899140    6843 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:19:15.899316    6843 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:19:15.899320    6843 out.go:304] Setting ErrFile to fd 2...
	I0812 03:19:15.899322    6843 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:19:15.899480    6843 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	W0812 03:19:15.899574    6843 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19409-6342/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19409-6342/.minikube/config/config.json: no such file or directory
	I0812 03:19:15.900836    6843 out.go:298] Setting JSON to true
	I0812 03:19:15.919728    6843 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4725,"bootTime":1723453230,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:19:15.919797    6843 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:19:15.926377    6843 out.go:97] [download-only-858000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:19:15.926498    6843 notify.go:220] Checking for updates...
	W0812 03:19:15.926551    6843 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball: no such file or directory
	I0812 03:19:15.930458    6843 out.go:169] MINIKUBE_LOCATION=19409
	I0812 03:19:15.933886    6843 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:19:15.939751    6843 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:19:15.942823    6843 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:19:15.946765    6843 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	W0812 03:19:15.952456    6843 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0812 03:19:15.952633    6843 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:19:15.956369    6843 out.go:97] Using the qemu2 driver based on user configuration
	I0812 03:19:15.956386    6843 start.go:297] selected driver: qemu2
	I0812 03:19:15.956399    6843 start.go:901] validating driver "qemu2" against <nil>
	I0812 03:19:15.956470    6843 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 03:19:15.960304    6843 out.go:169] Automatically selected the socket_vmnet network
	I0812 03:19:15.966239    6843 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0812 03:19:15.966338    6843 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0812 03:19:15.966397    6843 cni.go:84] Creating CNI manager for ""
	I0812 03:19:15.966414    6843 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0812 03:19:15.966467    6843 start.go:340] cluster config:
	{Name:download-only-858000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-858000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:19:15.970177    6843 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:19:15.974397    6843 out.go:97] Downloading VM boot image ...
	I0812 03:19:15.974423    6843 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso
	I0812 03:19:22.236988    6843 out.go:97] Starting "download-only-858000" primary control-plane node in "download-only-858000" cluster
	I0812 03:19:22.237031    6843 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0812 03:19:22.292398    6843 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0812 03:19:22.292416    6843 cache.go:56] Caching tarball of preloaded images
	I0812 03:19:22.293203    6843 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0812 03:19:22.297587    6843 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0812 03:19:22.297593    6843 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0812 03:19:22.371176    6843 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0812 03:19:29.637345    6843 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0812 03:19:29.637515    6843 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0812 03:19:30.333514    6843 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0812 03:19:30.333713    6843 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/download-only-858000/config.json ...
	I0812 03:19:30.333730    6843 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/download-only-858000/config.json: {Name:mk6762fe2e2f4c26319b8a4a357a4ba0c4bb833b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:19:30.333961    6843 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0812 03:19:30.334159    6843 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0812 03:19:30.850864    6843 out.go:169] 
	W0812 03:19:30.857887    6843 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19409-6342/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108c4dd40 0x108c4dd40 0x108c4dd40 0x108c4dd40 0x108c4dd40 0x108c4dd40 0x108c4dd40] Decompressors:map[bz2:0x1400000e250 gz:0x1400000e258 tar:0x1400000e1c0 tar.bz2:0x1400000e200 tar.gz:0x1400000e210 tar.xz:0x1400000e220 tar.zst:0x1400000e230 tbz2:0x1400000e200 tgz:0x1400000e210 txz:0x1400000e220 tzst:0x1400000e230 xz:0x1400000e260 zip:0x1400000e270 zst:0x1400000e268] Getters:map[file:0x140014205c0 http:0x1400086e500 https:0x1400086e550] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0812 03:19:30.857915    6843 out_reason.go:110] 
	W0812 03:19:30.866816    6843 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:19:30.870826    6843 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-858000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (15.04s)
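
Note: the root cause is the 404 on the checksum URL (https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256). Kubernetes v1.20.0 appears to predate darwin/arm64 release binaries, so go-getter's checksum fetch fails before the kubectl download even starts, producing exit status 40. A minimal stand-alone Go sketch (not minikube code; the URL is copied from the log above) that reproduces the probe:

	// probe_checksum.go: HEAD the checksum file that go-getter fetches
	// first. For v1.20.0 on darwin/arm64 this is expected to print
	// "404 Not Found", matching "bad response code: 404" in the log.
	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		const url = "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request error:", err)
			return
		}
		resp.Body.Close()
		fmt.Println(resp.Status)
	}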

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19409-6342/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
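
Note: this failure is downstream of the json-events failure above: kubectl was never downloaded, so the cached binary is absent. A sketch of the existence check the test performs (assumed shape of aaa_download_only_test.go:175; the path is copied from the log):

	// stat_kubectl.go: stand-alone sketch, not the actual test code.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		const path = "/Users/jenkins/minikube-integration/19409-6342/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
		if _, err := os.Stat(path); err != nil {
			// reproduces "no such file or directory" from the failure above
			fmt.Println("expected binary at", path, "but got:", err)
		}
	}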

TestOffline (10.13s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-441000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-441000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (10.002769375s)

-- stdout --
	* [offline-docker-441000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-441000" primary control-plane node in "offline-docker-441000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-441000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0812 03:32:01.768309    8616 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:32:01.768459    8616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:32:01.768466    8616 out.go:304] Setting ErrFile to fd 2...
	I0812 03:32:01.768468    8616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:32:01.768604    8616 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:32:01.769843    8616 out.go:298] Setting JSON to false
	I0812 03:32:01.787579    8616 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5491,"bootTime":1723453230,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:32:01.787670    8616 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:32:01.792442    8616 out.go:177] * [offline-docker-441000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:32:01.800435    8616 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:32:01.800464    8616 notify.go:220] Checking for updates...
	I0812 03:32:01.807409    8616 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:32:01.810417    8616 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:32:01.811603    8616 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:32:01.814398    8616 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:32:01.817427    8616 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:32:01.820760    8616 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:32:01.820813    8616 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:32:01.824331    8616 out.go:177] * Using the qemu2 driver based on user configuration
	I0812 03:32:01.831368    8616 start.go:297] selected driver: qemu2
	I0812 03:32:01.831379    8616 start.go:901] validating driver "qemu2" against <nil>
	I0812 03:32:01.831386    8616 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:32:01.833431    8616 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 03:32:01.836372    8616 out.go:177] * Automatically selected the socket_vmnet network
	I0812 03:32:01.839503    8616 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 03:32:01.839519    8616 cni.go:84] Creating CNI manager for ""
	I0812 03:32:01.839526    8616 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0812 03:32:01.839529    8616 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0812 03:32:01.839562    8616 start.go:340] cluster config:
	{Name:offline-docker-441000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-441000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:32:01.843183    8616 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:32:01.850400    8616 out.go:177] * Starting "offline-docker-441000" primary control-plane node in "offline-docker-441000" cluster
	I0812 03:32:01.854402    8616 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0812 03:32:01.854437    8616 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0812 03:32:01.854448    8616 cache.go:56] Caching tarball of preloaded images
	I0812 03:32:01.854522    8616 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0812 03:32:01.854528    8616 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0812 03:32:01.854607    8616 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/offline-docker-441000/config.json ...
	I0812 03:32:01.854619    8616 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/offline-docker-441000/config.json: {Name:mke3bc4e6d9ca8932fdf34f44041bed9fc29feff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:32:01.854851    8616 start.go:360] acquireMachinesLock for offline-docker-441000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:32:01.854890    8616 start.go:364] duration metric: took 28.125µs to acquireMachinesLock for "offline-docker-441000"
	I0812 03:32:01.854903    8616 start.go:93] Provisioning new machine with config: &{Name:offline-docker-441000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-441000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:32:01.854935    8616 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:32:01.863368    8616 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0812 03:32:01.879446    8616 start.go:159] libmachine.API.Create for "offline-docker-441000" (driver="qemu2")
	I0812 03:32:01.879486    8616 client.go:168] LocalClient.Create starting
	I0812 03:32:01.879576    8616 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:32:01.879608    8616 main.go:141] libmachine: Decoding PEM data...
	I0812 03:32:01.879622    8616 main.go:141] libmachine: Parsing certificate...
	I0812 03:32:01.879670    8616 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:32:01.879693    8616 main.go:141] libmachine: Decoding PEM data...
	I0812 03:32:01.879700    8616 main.go:141] libmachine: Parsing certificate...
	I0812 03:32:01.880079    8616 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:32:02.033403    8616 main.go:141] libmachine: Creating SSH key...
	I0812 03:32:02.113905    8616 main.go:141] libmachine: Creating Disk image...
	I0812 03:32:02.113919    8616 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:32:02.114109    8616 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/offline-docker-441000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/offline-docker-441000/disk.qcow2
	I0812 03:32:02.123795    8616 main.go:141] libmachine: STDOUT: 
	I0812 03:32:02.123828    8616 main.go:141] libmachine: STDERR: 
	I0812 03:32:02.123905    8616 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/offline-docker-441000/disk.qcow2 +20000M
	I0812 03:32:02.132771    8616 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:32:02.132790    8616 main.go:141] libmachine: STDERR: 
	I0812 03:32:02.132808    8616 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/offline-docker-441000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/offline-docker-441000/disk.qcow2
	I0812 03:32:02.132812    8616 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:32:02.132833    8616 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:32:02.132858    8616 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/offline-docker-441000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/offline-docker-441000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/offline-docker-441000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:c9:7b:4c:50:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/offline-docker-441000/disk.qcow2
	I0812 03:32:02.134581    8616 main.go:141] libmachine: STDOUT: 
	I0812 03:32:02.134599    8616 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:32:02.134628    8616 client.go:171] duration metric: took 255.141958ms to LocalClient.Create
	I0812 03:32:04.136685    8616 start.go:128] duration metric: took 2.281779792s to createHost
	I0812 03:32:04.136704    8616 start.go:83] releasing machines lock for "offline-docker-441000", held for 2.281847s
	W0812 03:32:04.136718    8616 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:32:04.142331    8616 out.go:177] * Deleting "offline-docker-441000" in qemu2 ...
	W0812 03:32:04.156641    8616 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:32:04.156652    8616 start.go:729] Will try again in 5 seconds ...
	I0812 03:32:09.158841    8616 start.go:360] acquireMachinesLock for offline-docker-441000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:32:09.159188    8616 start.go:364] duration metric: took 262.334µs to acquireMachinesLock for "offline-docker-441000"
	I0812 03:32:09.159317    8616 start.go:93] Provisioning new machine with config: &{Name:offline-docker-441000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-441000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:32:09.159668    8616 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:32:09.170213    8616 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0812 03:32:09.221012    8616 start.go:159] libmachine.API.Create for "offline-docker-441000" (driver="qemu2")
	I0812 03:32:09.221075    8616 client.go:168] LocalClient.Create starting
	I0812 03:32:09.221197    8616 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:32:09.221260    8616 main.go:141] libmachine: Decoding PEM data...
	I0812 03:32:09.221282    8616 main.go:141] libmachine: Parsing certificate...
	I0812 03:32:09.221344    8616 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:32:09.221388    8616 main.go:141] libmachine: Decoding PEM data...
	I0812 03:32:09.221401    8616 main.go:141] libmachine: Parsing certificate...
	I0812 03:32:09.221896    8616 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:32:09.385810    8616 main.go:141] libmachine: Creating SSH key...
	I0812 03:32:09.674657    8616 main.go:141] libmachine: Creating Disk image...
	I0812 03:32:09.674671    8616 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:32:09.674939    8616 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/offline-docker-441000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/offline-docker-441000/disk.qcow2
	I0812 03:32:09.684725    8616 main.go:141] libmachine: STDOUT: 
	I0812 03:32:09.684745    8616 main.go:141] libmachine: STDERR: 
	I0812 03:32:09.684804    8616 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/offline-docker-441000/disk.qcow2 +20000M
	I0812 03:32:09.692809    8616 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:32:09.692825    8616 main.go:141] libmachine: STDERR: 
	I0812 03:32:09.692837    8616 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/offline-docker-441000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/offline-docker-441000/disk.qcow2
	I0812 03:32:09.692841    8616 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:32:09.692852    8616 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:32:09.692891    8616 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/offline-docker-441000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/offline-docker-441000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/offline-docker-441000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:34:52:3a:b2:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/offline-docker-441000/disk.qcow2
	I0812 03:32:09.694468    8616 main.go:141] libmachine: STDOUT: 
	I0812 03:32:09.694481    8616 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:32:09.694493    8616 client.go:171] duration metric: took 473.420292ms to LocalClient.Create
	I0812 03:32:11.696646    8616 start.go:128] duration metric: took 2.536984959s to createHost
	I0812 03:32:11.696703    8616 start.go:83] releasing machines lock for "offline-docker-441000", held for 2.537531708s
	W0812 03:32:11.697070    8616 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-441000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-441000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:32:11.711784    8616 out.go:177] 
	W0812 03:32:11.714786    8616 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:32:11.714833    8616 out.go:239] * 
	* 
	W0812 03:32:11.717592    8616 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:32:11.728711    8616 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-441000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-08-12 03:32:11.742166 -0700 PDT m=+775.935572126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-441000 -n offline-docker-441000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-441000 -n offline-docker-441000: exit status 7 (53.462041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-441000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-441000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-441000
--- FAIL: TestOffline (10.13s)
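
Note: this is the failure mode shared by most of the ~10s failures in this run: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so every qemu2 VM creation aborts with "Connection refused" before the VM boots. A quick stand-alone Go probe (a sketch assuming the daemon listens on that unix socket, as the qemu command line above implies; connecting may require root):

	// probe_socket_vmnet.go: dial the daemon's unix socket directly.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// "connection refused" here matches the errors in these logs
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}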

TestAddons/Setup (10.12s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-717000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-717000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.115946917s)

-- stdout --
	* [addons-717000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-717000" primary control-plane node in "addons-717000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-717000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0812 03:19:51.925415    6950 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:19:51.925555    6950 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:19:51.925558    6950 out.go:304] Setting ErrFile to fd 2...
	I0812 03:19:51.925564    6950 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:19:51.925699    6950 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:19:51.926864    6950 out.go:298] Setting JSON to false
	I0812 03:19:51.942901    6950 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4761,"bootTime":1723453230,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:19:51.942964    6950 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:19:51.947336    6950 out.go:177] * [addons-717000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:19:51.954358    6950 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:19:51.954416    6950 notify.go:220] Checking for updates...
	I0812 03:19:51.961290    6950 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:19:51.964304    6950 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:19:51.967324    6950 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:19:51.970268    6950 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:19:51.973293    6950 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:19:51.976412    6950 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:19:51.980229    6950 out.go:177] * Using the qemu2 driver based on user configuration
	I0812 03:19:51.987360    6950 start.go:297] selected driver: qemu2
	I0812 03:19:51.987365    6950 start.go:901] validating driver "qemu2" against <nil>
	I0812 03:19:51.987372    6950 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:19:51.989699    6950 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 03:19:51.993316    6950 out.go:177] * Automatically selected the socket_vmnet network
	I0812 03:19:51.996391    6950 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 03:19:51.996406    6950 cni.go:84] Creating CNI manager for ""
	I0812 03:19:51.996412    6950 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0812 03:19:51.996417    6950 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0812 03:19:51.996450    6950 start.go:340] cluster config:
	{Name:addons-717000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-717000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:19:52.000222    6950 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:19:52.008322    6950 out.go:177] * Starting "addons-717000" primary control-plane node in "addons-717000" cluster
	I0812 03:19:52.012307    6950 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0812 03:19:52.012321    6950 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0812 03:19:52.012332    6950 cache.go:56] Caching tarball of preloaded images
	I0812 03:19:52.012394    6950 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0812 03:19:52.012400    6950 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0812 03:19:52.012587    6950 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/addons-717000/config.json ...
	I0812 03:19:52.012599    6950 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/addons-717000/config.json: {Name:mk4175789ad93aba7bff33e8ec9fabfa29ce5819 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:19:52.013001    6950 start.go:360] acquireMachinesLock for addons-717000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:19:52.013065    6950 start.go:364] duration metric: took 58.541µs to acquireMachinesLock for "addons-717000"
	I0812 03:19:52.013077    6950 start.go:93] Provisioning new machine with config: &{Name:addons-717000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-717000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:19:52.013108    6950 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:19:52.021275    6950 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0812 03:19:52.040695    6950 start.go:159] libmachine.API.Create for "addons-717000" (driver="qemu2")
	I0812 03:19:52.040730    6950 client.go:168] LocalClient.Create starting
	I0812 03:19:52.040870    6950 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:19:52.110154    6950 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:19:52.163438    6950 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:19:52.424821    6950 main.go:141] libmachine: Creating SSH key...
	I0812 03:19:52.522553    6950 main.go:141] libmachine: Creating Disk image...
	I0812 03:19:52.522558    6950 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:19:52.522772    6950 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/addons-717000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/addons-717000/disk.qcow2
	I0812 03:19:52.532173    6950 main.go:141] libmachine: STDOUT: 
	I0812 03:19:52.532197    6950 main.go:141] libmachine: STDERR: 
	I0812 03:19:52.532248    6950 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/addons-717000/disk.qcow2 +20000M
	I0812 03:19:52.540219    6950 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:19:52.540245    6950 main.go:141] libmachine: STDERR: 
	I0812 03:19:52.540261    6950 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/addons-717000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/addons-717000/disk.qcow2
	I0812 03:19:52.540265    6950 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:19:52.540283    6950 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:19:52.540315    6950 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/addons-717000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/addons-717000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/addons-717000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:f3:44:ce:fa:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/addons-717000/disk.qcow2
	I0812 03:19:52.541944    6950 main.go:141] libmachine: STDOUT: 
	I0812 03:19:52.541960    6950 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:19:52.541978    6950 client.go:171] duration metric: took 501.242125ms to LocalClient.Create
	I0812 03:19:54.544124    6950 start.go:128] duration metric: took 2.5310355s to createHost
	I0812 03:19:54.544182    6950 start.go:83] releasing machines lock for "addons-717000", held for 2.531148042s
	W0812 03:19:54.544278    6950 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:19:54.554636    6950 out.go:177] * Deleting "addons-717000" in qemu2 ...
	W0812 03:19:54.582779    6950 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:19:54.582807    6950 start.go:729] Will try again in 5 seconds ...
	I0812 03:19:59.584933    6950 start.go:360] acquireMachinesLock for addons-717000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:19:59.585405    6950 start.go:364] duration metric: took 323.083µs to acquireMachinesLock for "addons-717000"
	I0812 03:19:59.585521    6950 start.go:93] Provisioning new machine with config: &{Name:addons-717000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-717000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:19:59.585895    6950 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:19:59.596535    6950 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0812 03:19:59.647251    6950 start.go:159] libmachine.API.Create for "addons-717000" (driver="qemu2")
	I0812 03:19:59.647332    6950 client.go:168] LocalClient.Create starting
	I0812 03:19:59.647455    6950 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:19:59.647515    6950 main.go:141] libmachine: Decoding PEM data...
	I0812 03:19:59.647539    6950 main.go:141] libmachine: Parsing certificate...
	I0812 03:19:59.647613    6950 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:19:59.647661    6950 main.go:141] libmachine: Decoding PEM data...
	I0812 03:19:59.647676    6950 main.go:141] libmachine: Parsing certificate...
	I0812 03:19:59.648182    6950 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:19:59.811463    6950 main.go:141] libmachine: Creating SSH key...
	I0812 03:19:59.947505    6950 main.go:141] libmachine: Creating Disk image...
	I0812 03:19:59.947517    6950 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:19:59.947716    6950 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/addons-717000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/addons-717000/disk.qcow2
	I0812 03:19:59.957094    6950 main.go:141] libmachine: STDOUT: 
	I0812 03:19:59.957112    6950 main.go:141] libmachine: STDERR: 
	I0812 03:19:59.957167    6950 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/addons-717000/disk.qcow2 +20000M
	I0812 03:19:59.964967    6950 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:19:59.964984    6950 main.go:141] libmachine: STDERR: 
	I0812 03:19:59.964998    6950 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/addons-717000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/addons-717000/disk.qcow2
	I0812 03:19:59.965001    6950 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:19:59.965020    6950 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:19:59.965057    6950 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/addons-717000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/addons-717000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/addons-717000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:d8:42:79:ed:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/addons-717000/disk.qcow2
	I0812 03:19:59.966672    6950 main.go:141] libmachine: STDOUT: 
	I0812 03:19:59.966687    6950 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:19:59.966699    6950 client.go:171] duration metric: took 319.365625ms to LocalClient.Create
	I0812 03:20:01.968812    6950 start.go:128] duration metric: took 2.382911042s to createHost
	I0812 03:20:01.968891    6950 start.go:83] releasing machines lock for "addons-717000", held for 2.383471125s
	W0812 03:20:01.969328    6950 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-717000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-717000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:20:01.982764    6950 out.go:177] 
	W0812 03:20:01.988884    6950 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:20:01.988920    6950 out.go:239] * 
	* 
	W0812 03:20:01.991638    6950 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:20:01.999792    6950 out.go:177] 
** /stderr **
addons_test.go:112: out/minikube-darwin-arm64 start -p addons-717000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.12s)
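Note: minikube never reaches VM boot here; the step that fails is the socket_vmnet_client wrapper that hands QEMU its network file descriptor. The connection can be exercised without minikube by reusing the client invocation from the log above; the wrapped command (true) is just a placeholder:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

	# With the daemon down, this should print the same error as the log:
	# Failed to connect to "/var/run/socket_vmnet": Connection refused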
TestCertOptions (10.02s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-348000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-348000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.758321167s)
-- stdout --
	* [cert-options-348000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-348000" primary control-plane node in "cert-options-348000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-348000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-348000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-348000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-348000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-348000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (80.482291ms)
-- stdout --
	* The control-plane node cert-options-348000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-348000"
-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-348000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-348000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-348000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-348000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (41.983708ms)
-- stdout --
	* The control-plane node cert-options-348000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-348000"
-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-348000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right API port. 
-- stdout --
	* The control-plane node cert-options-348000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-348000"
-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-08-12 03:32:41.968983 -0700 PDT m=+806.162879626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-348000 -n cert-options-348000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-348000 -n cert-options-348000: exit status 7 (28.856875ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-348000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-348000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-348000
--- FAIL: TestCertOptions (10.02s)
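Note: the SAN assertions at cert_options_test.go:69 never had a live certificate to inspect, since the VM never started. For reference, the check the test automates can be run by hand once the node is up; the grep filter here is illustrative, not part of the test:

	out/minikube-darwin-arm64 -p cert-options-348000 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"

Per the start flags, the SAN list should contain 127.0.0.1, 192.168.15.15, localhost, and www.google.com, and the kubeconfig should point at apiserver port 8555.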
TestCertExpiration (195.18s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-736000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-736000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.842955459s)
-- stdout --
	* [cert-expiration-736000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-736000" primary control-plane node in "cert-expiration-736000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-736000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-736000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-736000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-736000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.22427575s)
-- stdout --
	* [cert-expiration-736000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-736000" primary control-plane node in "cert-expiration-736000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-736000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-736000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-736000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-736000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-736000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-736000" primary control-plane node in "cert-expiration-736000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-736000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-736000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-736000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-08-12 03:35:42.066235 -0700 PDT m=+986.253097918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-736000 -n cert-expiration-736000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-736000 -n cert-expiration-736000: exit status 7 (32.186458ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-736000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-736000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-736000
--- FAIL: TestCertExpiration (195.18s)
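Note: this test starts a cluster with --cert-expiration=3m, waits out the three minutes (which accounts for the ~195s duration even though both starts fail within seconds), then restarts with --cert-expiration=8760h and expects a warning about expired certs. On a working cluster, the expiry the test relies on could be confirmed manually; a sketch under the same assumptions as above:

	out/minikube-darwin-arm64 -p cert-expiration-736000 ssh \
	  "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"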
TestDockerFlags (10.11s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-150000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-150000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.885514458s)
-- stdout --
	* [docker-flags-150000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-150000" primary control-plane node in "docker-flags-150000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-150000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0812 03:32:21.969957    8809 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:32:21.970085    8809 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:32:21.970089    8809 out.go:304] Setting ErrFile to fd 2...
	I0812 03:32:21.970093    8809 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:32:21.970248    8809 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:32:21.971362    8809 out.go:298] Setting JSON to false
	I0812 03:32:21.987410    8809 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5511,"bootTime":1723453230,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:32:21.987480    8809 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:32:21.993448    8809 out.go:177] * [docker-flags-150000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:32:22.000241    8809 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:32:22.000288    8809 notify.go:220] Checking for updates...
	I0812 03:32:22.008381    8809 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:32:22.009740    8809 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:32:22.012376    8809 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:32:22.015398    8809 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:32:22.018385    8809 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:32:22.021739    8809 config.go:182] Loaded profile config "force-systemd-flag-421000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:32:22.021808    8809 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:32:22.021853    8809 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:32:22.026323    8809 out.go:177] * Using the qemu2 driver based on user configuration
	I0812 03:32:22.033419    8809 start.go:297] selected driver: qemu2
	I0812 03:32:22.033426    8809 start.go:901] validating driver "qemu2" against <nil>
	I0812 03:32:22.033434    8809 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:32:22.035718    8809 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 03:32:22.038385    8809 out.go:177] * Automatically selected the socket_vmnet network
	I0812 03:32:22.041488    8809 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0812 03:32:22.041523    8809 cni.go:84] Creating CNI manager for ""
	I0812 03:32:22.041531    8809 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0812 03:32:22.041538    8809 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0812 03:32:22.041569    8809 start.go:340] cluster config:
	{Name:docker-flags-150000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-150000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:32:22.045131    8809 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:32:22.052360    8809 out.go:177] * Starting "docker-flags-150000" primary control-plane node in "docker-flags-150000" cluster
	I0812 03:32:22.055278    8809 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0812 03:32:22.055293    8809 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0812 03:32:22.055301    8809 cache.go:56] Caching tarball of preloaded images
	I0812 03:32:22.055369    8809 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0812 03:32:22.055374    8809 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0812 03:32:22.055443    8809 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/docker-flags-150000/config.json ...
	I0812 03:32:22.055453    8809 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/docker-flags-150000/config.json: {Name:mk97ba7a1cc0da4889f3481ee78b9d879671bbf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:32:22.055665    8809 start.go:360] acquireMachinesLock for docker-flags-150000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:32:22.055699    8809 start.go:364] duration metric: took 26.333µs to acquireMachinesLock for "docker-flags-150000"
	I0812 03:32:22.055712    8809 start.go:93] Provisioning new machine with config: &{Name:docker-flags-150000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-150000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:32:22.055740    8809 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:32:22.064239    8809 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0812 03:32:22.081566    8809 start.go:159] libmachine.API.Create for "docker-flags-150000" (driver="qemu2")
	I0812 03:32:22.081595    8809 client.go:168] LocalClient.Create starting
	I0812 03:32:22.081667    8809 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:32:22.081697    8809 main.go:141] libmachine: Decoding PEM data...
	I0812 03:32:22.081705    8809 main.go:141] libmachine: Parsing certificate...
	I0812 03:32:22.081748    8809 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:32:22.081770    8809 main.go:141] libmachine: Decoding PEM data...
	I0812 03:32:22.081777    8809 main.go:141] libmachine: Parsing certificate...
	I0812 03:32:22.082114    8809 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:32:22.241032    8809 main.go:141] libmachine: Creating SSH key...
	I0812 03:32:22.426176    8809 main.go:141] libmachine: Creating Disk image...
	I0812 03:32:22.426184    8809 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:32:22.426419    8809 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/docker-flags-150000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/docker-flags-150000/disk.qcow2
	I0812 03:32:22.436087    8809 main.go:141] libmachine: STDOUT: 
	I0812 03:32:22.436101    8809 main.go:141] libmachine: STDERR: 
	I0812 03:32:22.436140    8809 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/docker-flags-150000/disk.qcow2 +20000M
	I0812 03:32:22.444055    8809 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:32:22.444065    8809 main.go:141] libmachine: STDERR: 
	I0812 03:32:22.444081    8809 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/docker-flags-150000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/docker-flags-150000/disk.qcow2
	I0812 03:32:22.444084    8809 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:32:22.444097    8809 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:32:22.444131    8809 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/docker-flags-150000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/docker-flags-150000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/docker-flags-150000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:54:4a:7f:36:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/docker-flags-150000/disk.qcow2
	I0812 03:32:22.445757    8809 main.go:141] libmachine: STDOUT: 
	I0812 03:32:22.445769    8809 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:32:22.445786    8809 client.go:171] duration metric: took 364.192083ms to LocalClient.Create
	I0812 03:32:24.447931    8809 start.go:128] duration metric: took 2.392206875s to createHost
	I0812 03:32:24.447988    8809 start.go:83] releasing machines lock for "docker-flags-150000", held for 2.392314667s
	W0812 03:32:24.448036    8809 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:32:24.473099    8809 out.go:177] * Deleting "docker-flags-150000" in qemu2 ...
	W0812 03:32:24.494311    8809 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:32:24.494331    8809 start.go:729] Will try again in 5 seconds ...
	I0812 03:32:29.496381    8809 start.go:360] acquireMachinesLock for docker-flags-150000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:32:29.496693    8809 start.go:364] duration metric: took 245.125µs to acquireMachinesLock for "docker-flags-150000"
	I0812 03:32:29.496824    8809 start.go:93] Provisioning new machine with config: &{Name:docker-flags-150000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-150000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:32:29.497112    8809 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:32:29.505612    8809 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0812 03:32:29.547806    8809 start.go:159] libmachine.API.Create for "docker-flags-150000" (driver="qemu2")
	I0812 03:32:29.547865    8809 client.go:168] LocalClient.Create starting
	I0812 03:32:29.548041    8809 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:32:29.548123    8809 main.go:141] libmachine: Decoding PEM data...
	I0812 03:32:29.548141    8809 main.go:141] libmachine: Parsing certificate...
	I0812 03:32:29.548225    8809 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:32:29.548270    8809 main.go:141] libmachine: Decoding PEM data...
	I0812 03:32:29.548282    8809 main.go:141] libmachine: Parsing certificate...
	I0812 03:32:29.549417    8809 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:32:29.721102    8809 main.go:141] libmachine: Creating SSH key...
	I0812 03:32:29.763202    8809 main.go:141] libmachine: Creating Disk image...
	I0812 03:32:29.763208    8809 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:32:29.763404    8809 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/docker-flags-150000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/docker-flags-150000/disk.qcow2
	I0812 03:32:29.772682    8809 main.go:141] libmachine: STDOUT: 
	I0812 03:32:29.772702    8809 main.go:141] libmachine: STDERR: 
	I0812 03:32:29.772756    8809 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/docker-flags-150000/disk.qcow2 +20000M
	I0812 03:32:29.780662    8809 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:32:29.780676    8809 main.go:141] libmachine: STDERR: 
	I0812 03:32:29.780694    8809 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/docker-flags-150000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/docker-flags-150000/disk.qcow2
	I0812 03:32:29.780699    8809 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:32:29.780707    8809 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:32:29.780732    8809 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/docker-flags-150000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/docker-flags-150000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/docker-flags-150000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:09:50:9e:8b:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/docker-flags-150000/disk.qcow2
	I0812 03:32:29.782338    8809 main.go:141] libmachine: STDOUT: 
	I0812 03:32:29.782351    8809 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:32:29.782368    8809 client.go:171] duration metric: took 234.500666ms to LocalClient.Create
	I0812 03:32:31.784515    8809 start.go:128] duration metric: took 2.287414875s to createHost
	I0812 03:32:31.784610    8809 start.go:83] releasing machines lock for "docker-flags-150000", held for 2.287895292s
	W0812 03:32:31.784895    8809 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-150000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-150000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:32:31.793459    8809 out.go:177] 
	W0812 03:32:31.800394    8809 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:32:31.800415    8809 out.go:239] * 
	* 
	W0812 03:32:31.803122    8809 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:32:31.813424    8809 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-150000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
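Editor's note: every start attempt in this run dies at the same step: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so qemu-system-aarch64 is never launched. A minimal shell sketch for triaging this on the CI host follows; the Homebrew service name is an assumption, not something this log confirms.

	# Is the socket present, and is the socket_vmnet daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If the daemon is down, one common way to (re)start it on a
	# Homebrew-managed host (assumed setup; adjust to the CI config):
	sudo brew services restart socket_vmnet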
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-150000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-150000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (71.552542ms)

-- stdout --
	* The control-plane node docker-flags-150000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-150000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-150000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-150000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-150000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-150000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-150000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-150000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-150000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (45.306958ms)

-- stdout --
	* The control-plane node docker-flags-150000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-150000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-150000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-150000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-150000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-150000\"\n"
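The --docker-opt values are checked the same way, against the unit's ExecStart line; a passing run would show something like the following (illustrative sketch, trimmed with "..."):

	$ out/minikube-darwin-arm64 -p docker-flags-150000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
	ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd ... --debug --icc=true ... }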
panic.go:626: *** TestDockerFlags FAILED at 2024-08-12 03:32:31.952277 -0700 PDT m=+796.146010751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-150000 -n docker-flags-150000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-150000 -n docker-flags-150000: exit status 7 (28.297ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-150000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-150000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-150000
--- FAIL: TestDockerFlags (10.11s)
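Note that the disk-image preparation in the log does succeed; only the network hookup fails. The two qemu-img steps can be replayed in isolation to rule out the image pipeline (paths shortened to placeholders):

	# Convert the raw scratch image to qcow2, then grow it by 20000M,
	# mirroring the two libmachine commands in the stderr log above.
	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	qemu-img resize disk.qcow2 +20000M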

TestForceSystemdFlag (10.16s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-421000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-421000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.995961084s)

-- stdout --
	* [force-systemd-flag-421000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-421000" primary control-plane node in "force-systemd-flag-421000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-421000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0812 03:32:16.859650    8784 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:32:16.859780    8784 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:32:16.859784    8784 out.go:304] Setting ErrFile to fd 2...
	I0812 03:32:16.859786    8784 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:32:16.859891    8784 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:32:16.860925    8784 out.go:298] Setting JSON to false
	I0812 03:32:16.876686    8784 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5506,"bootTime":1723453230,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:32:16.876749    8784 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:32:16.882835    8784 out.go:177] * [force-systemd-flag-421000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:32:16.889874    8784 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:32:16.889941    8784 notify.go:220] Checking for updates...
	I0812 03:32:16.896766    8784 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:32:16.903811    8784 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:32:16.906763    8784 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:32:16.909860    8784 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:32:16.912867    8784 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:32:16.916162    8784 config.go:182] Loaded profile config "force-systemd-env-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:32:16.916230    8784 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:32:16.916280    8784 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:32:16.920953    8784 out.go:177] * Using the qemu2 driver based on user configuration
	I0812 03:32:16.926852    8784 start.go:297] selected driver: qemu2
	I0812 03:32:16.926859    8784 start.go:901] validating driver "qemu2" against <nil>
	I0812 03:32:16.926868    8784 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:32:16.929093    8784 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 03:32:16.931811    8784 out.go:177] * Automatically selected the socket_vmnet network
	I0812 03:32:16.934931    8784 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0812 03:32:16.934965    8784 cni.go:84] Creating CNI manager for ""
	I0812 03:32:16.934971    8784 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0812 03:32:16.934980    8784 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0812 03:32:16.935004    8784 start.go:340] cluster config:
	{Name:force-systemd-flag-421000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-421000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:32:16.938661    8784 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:32:16.945802    8784 out.go:177] * Starting "force-systemd-flag-421000" primary control-plane node in "force-systemd-flag-421000" cluster
	I0812 03:32:16.949838    8784 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0812 03:32:16.949857    8784 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0812 03:32:16.949871    8784 cache.go:56] Caching tarball of preloaded images
	I0812 03:32:16.949940    8784 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0812 03:32:16.949946    8784 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0812 03:32:16.950018    8784 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/force-systemd-flag-421000/config.json ...
	I0812 03:32:16.950029    8784 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/force-systemd-flag-421000/config.json: {Name:mk13c0b7e539b762e038144919a2de608abf0041 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:32:16.950252    8784 start.go:360] acquireMachinesLock for force-systemd-flag-421000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:32:16.950291    8784 start.go:364] duration metric: took 29.417µs to acquireMachinesLock for "force-systemd-flag-421000"
	I0812 03:32:16.950305    8784 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-421000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-421000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:32:16.950333    8784 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:32:16.958876    8784 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0812 03:32:16.977074    8784 start.go:159] libmachine.API.Create for "force-systemd-flag-421000" (driver="qemu2")
	I0812 03:32:16.977120    8784 client.go:168] LocalClient.Create starting
	I0812 03:32:16.977187    8784 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:32:16.977218    8784 main.go:141] libmachine: Decoding PEM data...
	I0812 03:32:16.977228    8784 main.go:141] libmachine: Parsing certificate...
	I0812 03:32:16.977274    8784 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:32:16.977299    8784 main.go:141] libmachine: Decoding PEM data...
	I0812 03:32:16.977308    8784 main.go:141] libmachine: Parsing certificate...
	I0812 03:32:16.977669    8784 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:32:17.130394    8784 main.go:141] libmachine: Creating SSH key...
	I0812 03:32:17.268234    8784 main.go:141] libmachine: Creating Disk image...
	I0812 03:32:17.268244    8784 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:32:17.268454    8784 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-flag-421000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-flag-421000/disk.qcow2
	I0812 03:32:17.277868    8784 main.go:141] libmachine: STDOUT: 
	I0812 03:32:17.277887    8784 main.go:141] libmachine: STDERR: 
	I0812 03:32:17.277931    8784 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-flag-421000/disk.qcow2 +20000M
	I0812 03:32:17.285758    8784 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:32:17.285773    8784 main.go:141] libmachine: STDERR: 
	I0812 03:32:17.285789    8784 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-flag-421000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-flag-421000/disk.qcow2
	I0812 03:32:17.285817    8784 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:32:17.285826    8784 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:32:17.285855    8784 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-flag-421000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-flag-421000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-flag-421000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:e6:e3:50:e5:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-flag-421000/disk.qcow2
	I0812 03:32:17.287404    8784 main.go:141] libmachine: STDOUT: 
	I0812 03:32:17.287417    8784 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:32:17.287435    8784 client.go:171] duration metric: took 310.314792ms to LocalClient.Create
	I0812 03:32:19.289643    8784 start.go:128] duration metric: took 2.339317125s to createHost
	I0812 03:32:19.289708    8784 start.go:83] releasing machines lock for "force-systemd-flag-421000", held for 2.339446417s
	W0812 03:32:19.289763    8784 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:32:19.305897    8784 out.go:177] * Deleting "force-systemd-flag-421000" in qemu2 ...
	W0812 03:32:19.333892    8784 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:32:19.333920    8784 start.go:729] Will try again in 5 seconds ...
	I0812 03:32:24.336068    8784 start.go:360] acquireMachinesLock for force-systemd-flag-421000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:32:24.448112    8784 start.go:364] duration metric: took 111.934458ms to acquireMachinesLock for "force-systemd-flag-421000"
	I0812 03:32:24.448237    8784 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-421000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-421000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:32:24.448508    8784 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:32:24.461163    8784 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0812 03:32:24.510327    8784 start.go:159] libmachine.API.Create for "force-systemd-flag-421000" (driver="qemu2")
	I0812 03:32:24.510380    8784 client.go:168] LocalClient.Create starting
	I0812 03:32:24.510496    8784 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:32:24.510559    8784 main.go:141] libmachine: Decoding PEM data...
	I0812 03:32:24.510576    8784 main.go:141] libmachine: Parsing certificate...
	I0812 03:32:24.510639    8784 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:32:24.510681    8784 main.go:141] libmachine: Decoding PEM data...
	I0812 03:32:24.510692    8784 main.go:141] libmachine: Parsing certificate...
	I0812 03:32:24.511313    8784 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:32:24.674338    8784 main.go:141] libmachine: Creating SSH key...
	I0812 03:32:24.760093    8784 main.go:141] libmachine: Creating Disk image...
	I0812 03:32:24.760099    8784 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:32:24.760320    8784 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-flag-421000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-flag-421000/disk.qcow2
	I0812 03:32:24.769626    8784 main.go:141] libmachine: STDOUT: 
	I0812 03:32:24.769641    8784 main.go:141] libmachine: STDERR: 
	I0812 03:32:24.769697    8784 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-flag-421000/disk.qcow2 +20000M
	I0812 03:32:24.777606    8784 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:32:24.777622    8784 main.go:141] libmachine: STDERR: 
	I0812 03:32:24.777634    8784 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-flag-421000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-flag-421000/disk.qcow2
	I0812 03:32:24.777638    8784 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:32:24.777645    8784 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:32:24.777676    8784 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-flag-421000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-flag-421000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-flag-421000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:7d:3d:27:59:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-flag-421000/disk.qcow2
	I0812 03:32:24.779288    8784 main.go:141] libmachine: STDOUT: 
	I0812 03:32:24.779303    8784 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:32:24.779316    8784 client.go:171] duration metric: took 268.934625ms to LocalClient.Create
	I0812 03:32:26.781411    8784 start.go:128] duration metric: took 2.332925042s to createHost
	I0812 03:32:26.781434    8784 start.go:83] releasing machines lock for "force-systemd-flag-421000", held for 2.333336667s
	W0812 03:32:26.781526    8784 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-421000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-421000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:32:26.797758    8784 out.go:177] 
	W0812 03:32:26.806806    8784 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:32:26.806812    8784 out.go:239] * 
	* 
	W0812 03:32:26.807369    8784 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:32:26.819756    8784 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-421000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-421000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-421000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (45.288083ms)

-- stdout --
	* The control-plane node force-systemd-flag-421000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-421000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-421000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
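Had provisioning succeeded, the check above reduces to a single docker query that must print "systemd" when --force-systemd is honored (expected value shown for illustration):

	$ out/minikube-darwin-arm64 -p force-systemd-flag-421000 ssh "docker info --format {{.CgroupDriver}}"
	systemd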
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-08-12 03:32:26.875529 -0700 PDT m=+791.069180835
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-421000 -n force-systemd-flag-421000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-421000 -n force-systemd-flag-421000: exit status 7 (28.081791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-421000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-421000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-421000
--- FAIL: TestForceSystemdFlag (10.16s)
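For context, the launch that keeps failing wraps qemu in socket_vmnet_client so the guest NIC receives a vmnet file descriptor (fd=3) from the daemon; stripped to its shape, the invocation is (per-profile paths and the MAC address trimmed for readability, full command in the stderr log above):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf \
	  -m 2048 -smp 2 -boot d -cdrom boot2docker.iso \
	  -device virtio-net-pci,netdev=net0 -netdev socket,id=net0,fd=3 \
	  -daemonize disk.qcow2

With the daemon down, socket_vmnet_client exits before qemu ever runs, which is why each attempt fails in well under a second.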

TestForceSystemdEnv (10.07s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-569000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-569000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.882938542s)

-- stdout --
	* [force-systemd-env-569000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-569000" primary control-plane node in "force-systemd-env-569000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-569000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0812 03:32:11.896795    8750 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:32:11.896914    8750 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:32:11.896917    8750 out.go:304] Setting ErrFile to fd 2...
	I0812 03:32:11.896919    8750 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:32:11.897038    8750 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:32:11.898149    8750 out.go:298] Setting JSON to false
	I0812 03:32:11.914717    8750 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5501,"bootTime":1723453230,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:32:11.914794    8750 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:32:11.920720    8750 out.go:177] * [force-systemd-env-569000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:32:11.926843    8750 notify.go:220] Checking for updates...
	I0812 03:32:11.932730    8750 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:32:11.941661    8750 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:32:11.949577    8750 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:32:11.960707    8750 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:32:11.968650    8750 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:32:11.976619    8750 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0812 03:32:11.981029    8750 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:32:11.981078    8750 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:32:11.984560    8750 out.go:177] * Using the qemu2 driver based on user configuration
	I0812 03:32:11.991623    8750 start.go:297] selected driver: qemu2
	I0812 03:32:11.991629    8750 start.go:901] validating driver "qemu2" against <nil>
	I0812 03:32:11.991634    8750 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:32:11.994069    8750 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 03:32:11.997699    8750 out.go:177] * Automatically selected the socket_vmnet network
	I0812 03:32:12.000771    8750 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0812 03:32:12.000786    8750 cni.go:84] Creating CNI manager for ""
	I0812 03:32:12.000794    8750 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0812 03:32:12.000798    8750 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0812 03:32:12.000835    8750 start.go:340] cluster config:
	{Name:force-systemd-env-569000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-569000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:32:12.004965    8750 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:32:12.012682    8750 out.go:177] * Starting "force-systemd-env-569000" primary control-plane node in "force-systemd-env-569000" cluster
	I0812 03:32:12.016730    8750 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0812 03:32:12.016759    8750 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0812 03:32:12.016769    8750 cache.go:56] Caching tarball of preloaded images
	I0812 03:32:12.016845    8750 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0812 03:32:12.016851    8750 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0812 03:32:12.016917    8750 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/force-systemd-env-569000/config.json ...
	I0812 03:32:12.016928    8750 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/force-systemd-env-569000/config.json: {Name:mk806551f772dd720dfc0a1944f42fa6fe6e468e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:32:12.017146    8750 start.go:360] acquireMachinesLock for force-systemd-env-569000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:32:12.017177    8750 start.go:364] duration metric: took 25.792µs to acquireMachinesLock for "force-systemd-env-569000"
	I0812 03:32:12.017195    8750 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-569000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-569000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:32:12.017220    8750 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:32:12.024646    8750 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0812 03:32:12.039636    8750 start.go:159] libmachine.API.Create for "force-systemd-env-569000" (driver="qemu2")
	I0812 03:32:12.039668    8750 client.go:168] LocalClient.Create starting
	I0812 03:32:12.039727    8750 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:32:12.039755    8750 main.go:141] libmachine: Decoding PEM data...
	I0812 03:32:12.039767    8750 main.go:141] libmachine: Parsing certificate...
	I0812 03:32:12.039803    8750 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:32:12.039825    8750 main.go:141] libmachine: Decoding PEM data...
	I0812 03:32:12.039836    8750 main.go:141] libmachine: Parsing certificate...
	I0812 03:32:12.040196    8750 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:32:12.194648    8750 main.go:141] libmachine: Creating SSH key...
	I0812 03:32:12.290497    8750 main.go:141] libmachine: Creating Disk image...
	I0812 03:32:12.290510    8750 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:32:12.290750    8750 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-env-569000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-env-569000/disk.qcow2
	I0812 03:32:12.300115    8750 main.go:141] libmachine: STDOUT: 
	I0812 03:32:12.300138    8750 main.go:141] libmachine: STDERR: 
	I0812 03:32:12.300186    8750 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-env-569000/disk.qcow2 +20000M
	I0812 03:32:12.308376    8750 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:32:12.308390    8750 main.go:141] libmachine: STDERR: 
	I0812 03:32:12.308411    8750 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-env-569000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-env-569000/disk.qcow2
	I0812 03:32:12.308415    8750 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:32:12.308427    8750 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:32:12.308450    8750 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-env-569000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-env-569000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-env-569000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:6e:fa:65:31:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-env-569000/disk.qcow2
	I0812 03:32:12.310088    8750 main.go:141] libmachine: STDOUT: 
	I0812 03:32:12.310102    8750 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:32:12.310122    8750 client.go:171] duration metric: took 270.452833ms to LocalClient.Create
	I0812 03:32:14.312311    8750 start.go:128] duration metric: took 2.295093917s to createHost
	I0812 03:32:14.312437    8750 start.go:83] releasing machines lock for "force-systemd-env-569000", held for 2.295286416s
	W0812 03:32:14.312484    8750 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:32:14.323511    8750 out.go:177] * Deleting "force-systemd-env-569000" in qemu2 ...
	W0812 03:32:14.354310    8750 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:32:14.354340    8750 start.go:729] Will try again in 5 seconds ...
	I0812 03:32:19.356457    8750 start.go:360] acquireMachinesLock for force-systemd-env-569000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:32:19.356809    8750 start.go:364] duration metric: took 253.458µs to acquireMachinesLock for "force-systemd-env-569000"
	I0812 03:32:19.356870    8750 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-569000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-569000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:32:19.357112    8750 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:32:19.364847    8750 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0812 03:32:19.410120    8750 start.go:159] libmachine.API.Create for "force-systemd-env-569000" (driver="qemu2")
	I0812 03:32:19.410180    8750 client.go:168] LocalClient.Create starting
	I0812 03:32:19.410324    8750 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:32:19.410394    8750 main.go:141] libmachine: Decoding PEM data...
	I0812 03:32:19.410415    8750 main.go:141] libmachine: Parsing certificate...
	I0812 03:32:19.410479    8750 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:32:19.410522    8750 main.go:141] libmachine: Decoding PEM data...
	I0812 03:32:19.410538    8750 main.go:141] libmachine: Parsing certificate...
	I0812 03:32:19.411727    8750 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:32:19.581859    8750 main.go:141] libmachine: Creating SSH key...
	I0812 03:32:19.688636    8750 main.go:141] libmachine: Creating Disk image...
	I0812 03:32:19.688641    8750 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:32:19.688851    8750 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-env-569000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-env-569000/disk.qcow2
	I0812 03:32:19.698360    8750 main.go:141] libmachine: STDOUT: 
	I0812 03:32:19.698382    8750 main.go:141] libmachine: STDERR: 
	I0812 03:32:19.698430    8750 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-env-569000/disk.qcow2 +20000M
	I0812 03:32:19.706306    8750 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:32:19.706329    8750 main.go:141] libmachine: STDERR: 
	I0812 03:32:19.706340    8750 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-env-569000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-env-569000/disk.qcow2
	I0812 03:32:19.706345    8750 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:32:19.706351    8750 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:32:19.706379    8750 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-env-569000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-env-569000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-env-569000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:ff:06:b6:d7:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/force-systemd-env-569000/disk.qcow2
	I0812 03:32:19.708017    8750 main.go:141] libmachine: STDOUT: 
	I0812 03:32:19.708031    8750 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:32:19.708042    8750 client.go:171] duration metric: took 297.862583ms to LocalClient.Create
	I0812 03:32:21.710193    8750 start.go:128] duration metric: took 2.35309125s to createHost
	I0812 03:32:21.710268    8750 start.go:83] releasing machines lock for "force-systemd-env-569000", held for 2.353475958s
	W0812 03:32:21.710654    8750 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-569000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-569000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:32:21.721284    8750 out.go:177] 
	W0812 03:32:21.725319    8750 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:32:21.725403    8750 out.go:239] * 
	* 
	W0812 03:32:21.727903    8750 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:32:21.738116    8750 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-569000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-569000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-569000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (79.603417ms)

-- stdout --
	* The control-plane node force-systemd-env-569000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-569000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-569000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-08-12 03:32:21.83452 -0700 PDT m=+786.028089876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-569000 -n force-systemd-env-569000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-569000 -n force-systemd-env-569000: exit status 7 (33.866584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-569000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-569000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-569000
--- FAIL: TestForceSystemdEnv (10.07s)
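Every failure in this run traces back to the same root cause: nothing is accepting connections on the socket_vmnet unix socket, so /opt/socket_vmnet/bin/socket_vmnet_client cannot hand QEMU a network file descriptor and every VM create/restart aborts with "Connection refused". The condition can be reproduced without minikube at all; the following is a minimal Go sketch using only the standard library (the probeSocketVMnet helper is ours for illustration, not minikube code):

	// probe_socket_vmnet.go: hypothetical pre-flight check for the
	// socket_vmnet daemon; not part of the minikube test suite.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	// probeSocketVMnet dials the daemon's unix socket the same way any
	// client would; a refused connection here reproduces the STDERR lines
	// logged above before a VM is ever created.
	func probeSocketVMnet(path string) error {
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			return fmt.Errorf("socket_vmnet not reachable at %s: %w", path, err)
		}
		return conn.Close()
	}

	func main() {
		if err := probeSocketVMnet("/var/run/socket_vmnet"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("socket_vmnet is accepting connections")
	}

On a healthy agent the probe prints the success line; on this run it would fail exactly as the tests do, which points at the host's socket_vmnet daemon rather than at minikube itself.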

TestErrorSpam/setup (9.81s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-338000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-338000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 --driver=qemu2 : exit status 80 (9.805999792s)

-- stdout --
	* [nospam-338000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-338000" primary control-plane node in "nospam-338000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-338000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-338000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-338000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-338000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-338000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19409
- KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-338000" primary control-plane node in "nospam-338000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-338000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-338000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.81s)

TestFunctional/serial/StartWithProxy (9.97s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-369000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-369000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.8982815s)

-- stdout --
	* [functional-369000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-369000" primary control-plane node in "functional-369000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-369000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51062 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51062 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51062 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-369000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2236: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-369000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2241: start stdout=* [functional-369000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19409
- KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-369000" primary control-plane node in "functional-369000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-369000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2246: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:51062 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:51062 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:51062 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-369000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-369000 -n functional-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-369000 -n functional-369000: exit status 7 (67.628ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.97s)

TestFunctional/serial/SoftStart (5.25s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-369000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-369000 --alsologtostderr -v=8: exit status 80 (5.178098125s)

-- stdout --
	* [functional-369000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-369000" primary control-plane node in "functional-369000" cluster
	* Restarting existing qemu2 VM for "functional-369000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-369000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0812 03:20:28.490111    7075 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:20:28.490230    7075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:20:28.490234    7075 out.go:304] Setting ErrFile to fd 2...
	I0812 03:20:28.490236    7075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:20:28.490361    7075 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:20:28.491341    7075 out.go:298] Setting JSON to false
	I0812 03:20:28.507405    7075 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4798,"bootTime":1723453230,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:20:28.507485    7075 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:20:28.511990    7075 out.go:177] * [functional-369000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:20:28.519037    7075 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:20:28.519127    7075 notify.go:220] Checking for updates...
	I0812 03:20:28.526036    7075 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:20:28.529007    7075 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:20:28.531988    7075 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:20:28.535009    7075 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:20:28.538017    7075 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:20:28.541285    7075 config.go:182] Loaded profile config "functional-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:20:28.541334    7075 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:20:28.545977    7075 out.go:177] * Using the qemu2 driver based on existing profile
	I0812 03:20:28.552981    7075 start.go:297] selected driver: qemu2
	I0812 03:20:28.552987    7075 start.go:901] validating driver "qemu2" against &{Name:functional-369000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-369000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:20:28.553030    7075 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:20:28.555346    7075 cni.go:84] Creating CNI manager for ""
	I0812 03:20:28.555361    7075 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0812 03:20:28.555401    7075 start.go:340] cluster config:
	{Name:functional-369000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-369000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:20:28.558994    7075 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:20:28.565817    7075 out.go:177] * Starting "functional-369000" primary control-plane node in "functional-369000" cluster
	I0812 03:20:28.569092    7075 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0812 03:20:28.569108    7075 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0812 03:20:28.569118    7075 cache.go:56] Caching tarball of preloaded images
	I0812 03:20:28.569220    7075 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0812 03:20:28.569226    7075 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0812 03:20:28.569290    7075 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/functional-369000/config.json ...
	I0812 03:20:28.569808    7075 start.go:360] acquireMachinesLock for functional-369000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:20:28.569838    7075 start.go:364] duration metric: took 24.708µs to acquireMachinesLock for "functional-369000"
	I0812 03:20:28.569849    7075 start.go:96] Skipping create...Using existing machine configuration
	I0812 03:20:28.569858    7075 fix.go:54] fixHost starting: 
	I0812 03:20:28.569979    7075 fix.go:112] recreateIfNeeded on functional-369000: state=Stopped err=<nil>
	W0812 03:20:28.569988    7075 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 03:20:28.578007    7075 out.go:177] * Restarting existing qemu2 VM for "functional-369000" ...
	I0812 03:20:28.580919    7075 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:20:28.580969    7075 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/functional-369000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/functional-369000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/functional-369000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:c9:61:20:3c:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/functional-369000/disk.qcow2
	I0812 03:20:28.583031    7075 main.go:141] libmachine: STDOUT: 
	I0812 03:20:28.583053    7075 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:20:28.583084    7075 fix.go:56] duration metric: took 13.229583ms for fixHost
	I0812 03:20:28.583088    7075 start.go:83] releasing machines lock for "functional-369000", held for 13.245625ms
	W0812 03:20:28.583097    7075 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:20:28.583137    7075 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:20:28.583142    7075 start.go:729] Will try again in 5 seconds ...
	I0812 03:20:33.585217    7075 start.go:360] acquireMachinesLock for functional-369000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:20:33.585570    7075 start.go:364] duration metric: took 275.167µs to acquireMachinesLock for "functional-369000"
	I0812 03:20:33.585702    7075 start.go:96] Skipping create...Using existing machine configuration
	I0812 03:20:33.585721    7075 fix.go:54] fixHost starting: 
	I0812 03:20:33.586409    7075 fix.go:112] recreateIfNeeded on functional-369000: state=Stopped err=<nil>
	W0812 03:20:33.586432    7075 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 03:20:33.590840    7075 out.go:177] * Restarting existing qemu2 VM for "functional-369000" ...
	I0812 03:20:33.594662    7075 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:20:33.594878    7075 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/functional-369000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/functional-369000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/functional-369000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:c9:61:20:3c:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/functional-369000/disk.qcow2
	I0812 03:20:33.603456    7075 main.go:141] libmachine: STDOUT: 
	I0812 03:20:33.603511    7075 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:20:33.603577    7075 fix.go:56] duration metric: took 17.857667ms for fixHost
	I0812 03:20:33.603594    7075 start.go:83] releasing machines lock for "functional-369000", held for 18.001625ms
	W0812 03:20:33.603752    7075 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-369000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-369000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:20:33.609923    7075 out.go:177] 
	W0812 03:20:33.613769    7075 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:20:33.613818    7075 out.go:239] * 
	* 
	W0812 03:20:33.616291    7075 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:20:33.624801    7075 out.go:177] 

** /stderr **
functional_test.go:661: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-369000 --alsologtostderr -v=8": exit status 80
functional_test.go:663: soft start took 5.179938375s for "functional-369000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-369000 -n functional-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-369000 -n functional-369000: exit status 7 (66.233875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.25s)
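For reference, the exec pattern that produces the STDERR entries above reduces to a few lines. This Go sketch mirrors the "executing: /opt/socket_vmnet/bin/socket_vmnet_client ..." invocations from the log; the binary and socket paths are copied from the log, while the trailing -version argument is a stand-in of ours for the full qemu-system-aarch64 argument list:

	// socket_vmnet_launch_sketch.go: illustrative only, not minikube code.
	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	func main() {
		// socket_vmnet_client takes the daemon socket path first, then the
		// command to wrap; it connects to the socket and passes the resulting
		// descriptor to the child (the fd=3 in QEMU's -netdev argument above).
		cmd := exec.Command(
			"/opt/socket_vmnet/bin/socket_vmnet_client",
			"/var/run/socket_vmnet",
			"qemu-system-aarch64", "-version", // stand-in for the full VM args
		)
		var stderr bytes.Buffer
		cmd.Stderr = &stderr
		if err := cmd.Run(); err != nil {
			// With the daemon down this reproduces the "Connection refused"
			// message captured in the STDERR log entries.
			fmt.Printf("STDERR: %s(%v)\n", stderr.String(), err)
		}
	}

Because socket_vmnet_client must connect to /var/run/socket_vmnet before it starts the wrapped command, the failure surfaces on its stderr and QEMU never runs, which is why every profile in this report ends up in state=Stopped.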

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
functional_test.go:681: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (32.654958ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:683: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:687: expected current-context = "functional-369000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-369000 -n functional-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-369000 -n functional-369000: exit status 7 (29.176875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-369000 get po -A
functional_test.go:696: (dbg) Non-zero exit: kubectl --context functional-369000 get po -A: exit status 1 (25.977667ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-369000

** /stderr **
functional_test.go:698: failed to get kubectl pods: args "kubectl --context functional-369000 get po -A" : exit status 1
functional_test.go:702: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-369000\n"*: args "kubectl --context functional-369000 get po -A"
functional_test.go:705: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-369000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-369000 -n functional-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-369000 -n functional-369000: exit status 7 (28.567334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh sudo crictl images
functional_test.go:1124: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh sudo crictl images: exit status 83 (39.725667ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test.go:1126: failed to get images by "out/minikube-darwin-arm64 -p functional-369000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1130: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1147: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (38.877833ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test.go:1150: failed to manually delete image "out/minikube-darwin-arm64 -p functional-369000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (40.044167ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1163: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (39.834083ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test.go:1165: expected "out/minikube-darwin-arm64 -p functional-369000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.73s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 kubectl -- --context functional-369000 get pods
functional_test.go:716: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 kubectl -- --context functional-369000 get pods: exit status 1 (699.183667ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-369000
	* no server found for cluster "functional-369000"

** /stderr **
functional_test.go:719: failed to get pods. args "out/minikube-darwin-arm64 -p functional-369000 kubectl -- --context functional-369000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-369000 -n functional-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-369000 -n functional-369000: exit status 7 (31.860792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.73s)
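
This failure and the TestFunctional/serial/MinikubeKubectlCmdDirectly failure below share one cause: the earlier start never completed, so the kubeconfig contains neither a functional-369000 context nor a matching cluster entry, which is exactly what kubectl's two complaints say. A sketch of that pre-condition check using client-go's kubeconfig loader (KUBECONFIG is read from the environment, which the log below reports as /Users/jenkins/minikube-integration/19409-6342/kubeconfig; the check itself is hypothetical):

	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
		if err != nil {
			fmt.Println("cannot load kubeconfig:", err)
			return
		}
		// Mirrors kubectl's two complaints: missing context, then missing cluster.
		if _, ok := cfg.Contexts["functional-369000"]; !ok {
			fmt.Println("context was not found for specified context: functional-369000")
		}
		if _, ok := cfg.Clusters["functional-369000"]; !ok {
			fmt.Println(`no server found for cluster "functional-369000"`)
		}
	}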

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.97s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-369000 get pods
functional_test.go:741: (dbg) Non-zero exit: out/kubectl --context functional-369000 get pods: exit status 1 (944.348875ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-369000
	* no server found for cluster "functional-369000"

** /stderr **
functional_test.go:744: failed to run kubectl directly. args "out/kubectl --context functional-369000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-369000 -n functional-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-369000 -n functional-369000: exit status 7 (29.160375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.97s)

TestFunctional/serial/ExtraConfig (5.26s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-369000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-369000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.194746542s)

-- stdout --
	* [functional-369000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-369000" primary control-plane node in "functional-369000" cluster
	* Restarting existing qemu2 VM for "functional-369000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-369000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-369000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:759: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-369000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:761: restart took 5.195361334s for "functional-369000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-369000 -n functional-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-369000 -n functional-369000: exit status 7 (67.276541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.26s)
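
The stderr above points at the actual root cause of this run: the qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and every restart attempt dies because nothing answers on /var/run/socket_vmnet. An illustrative standalone probe of that unix socket (the socket path is taken from the log; the probe is not part of minikube):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same socket the qemu2 driver hands to socket_vmnet_client.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// On this agent this would reproduce the "Connection refused"
			// above, i.e. the daemon is down or the socket is stale.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe reports connection refused, restarting the socket_vmnet daemon on the agent (however it is supervised there) is the obvious first step; the suggested "minikube delete -p functional-369000" would not by itself bring the daemon back.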

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-369000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:810: (dbg) Non-zero exit: kubectl --context functional-369000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (29.73925ms)

** stderr ** 
	error: context "functional-369000" does not exist

** /stderr **
functional_test.go:812: failed to get components. args "kubectl --context functional-369000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-369000 -n functional-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-369000 -n functional-369000: exit status 7 (30.01875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)
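
For reference, with a live cluster this test would parse the JSON pod list the command above requests. A self-contained sketch of the same query and the minimal JSON shape such a check relies on (the context name comes from the log; the struct is trimmed to the standard metadata/status fields kubectl emits):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// podList is the minimal slice of `kubectl get po -o json` output
	// needed to see whether each control-plane pod reports Running.
	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Phase string `json:"phase"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-369000",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			fmt.Println("query failed (as in this run):", err)
			return
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			fmt.Println("bad JSON:", err)
			return
		}
		for _, p := range pods.Items {
			fmt.Printf("%s: %s\n", p.Metadata.Name, p.Status.Phase)
		}
	}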

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 logs
functional_test.go:1236: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 logs: exit status 83 (74.504375ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-858000 | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT |                     |
	|         | -p download-only-858000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT | 12 Aug 24 03:19 PDT |
	| delete  | -p download-only-858000                                                  | download-only-858000 | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT | 12 Aug 24 03:19 PDT |
	| start   | -o=json --download-only                                                  | download-only-681000 | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT |                     |
	|         | -p download-only-681000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT | 12 Aug 24 03:19 PDT |
	| delete  | -p download-only-681000                                                  | download-only-681000 | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT | 12 Aug 24 03:19 PDT |
	| start   | -o=json --download-only                                                  | download-only-833000 | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT |                     |
	|         | -p download-only-833000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                                        |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT | 12 Aug 24 03:19 PDT |
	| delete  | -p download-only-833000                                                  | download-only-833000 | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT | 12 Aug 24 03:19 PDT |
	| delete  | -p download-only-858000                                                  | download-only-858000 | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT | 12 Aug 24 03:19 PDT |
	| delete  | -p download-only-681000                                                  | download-only-681000 | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT | 12 Aug 24 03:19 PDT |
	| delete  | -p download-only-833000                                                  | download-only-833000 | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT | 12 Aug 24 03:19 PDT |
	| start   | --download-only -p                                                       | binary-mirror-249000 | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT |                     |
	|         | binary-mirror-249000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:51037                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-249000                                                  | binary-mirror-249000 | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT | 12 Aug 24 03:19 PDT |
	| addons  | enable dashboard -p                                                      | addons-717000        | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT |                     |
	|         | addons-717000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-717000        | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT |                     |
	|         | addons-717000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-717000 --wait=true                                             | addons-717000        | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-717000                                                         | addons-717000        | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT | 12 Aug 24 03:20 PDT |
	| start   | -p nospam-338000 -n=1 --memory=2250 --wait=false                         | nospam-338000        | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-338000 --log_dir                                                  | nospam-338000        | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-338000 --log_dir                                                  | nospam-338000        | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-338000 --log_dir                                                  | nospam-338000        | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-338000 --log_dir                                                  | nospam-338000        | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-338000 --log_dir                                                  | nospam-338000        | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-338000 --log_dir                                                  | nospam-338000        | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-338000 --log_dir                                                  | nospam-338000        | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-338000 --log_dir                                                  | nospam-338000        | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-338000 --log_dir                                                  | nospam-338000        | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-338000 --log_dir                                                  | nospam-338000        | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT | 12 Aug 24 03:20 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-338000 --log_dir                                                  | nospam-338000        | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT | 12 Aug 24 03:20 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-338000 --log_dir                                                  | nospam-338000        | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT | 12 Aug 24 03:20 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-338000                                                         | nospam-338000        | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT | 12 Aug 24 03:20 PDT |
	| start   | -p functional-369000                                                     | functional-369000    | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-369000                                                     | functional-369000    | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-369000 cache add                                              | functional-369000    | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT | 12 Aug 24 03:20 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-369000 cache add                                              | functional-369000    | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT | 12 Aug 24 03:20 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-369000 cache add                                              | functional-369000    | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT | 12 Aug 24 03:20 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-369000 cache add                                              | functional-369000    | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT | 12 Aug 24 03:20 PDT |
	|         | minikube-local-cache-test:functional-369000                              |                      |         |         |                     |                     |
	| cache   | functional-369000 cache delete                                           | functional-369000    | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT | 12 Aug 24 03:20 PDT |
	|         | minikube-local-cache-test:functional-369000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT | 12 Aug 24 03:20 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT | 12 Aug 24 03:20 PDT |
	| ssh     | functional-369000 ssh sudo                                               | functional-369000    | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-369000                                                        | functional-369000    | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-369000 ssh                                                    | functional-369000    | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-369000 cache reload                                           | functional-369000    | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT | 12 Aug 24 03:20 PDT |
	| ssh     | functional-369000 ssh                                                    | functional-369000    | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT | 12 Aug 24 03:20 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT | 12 Aug 24 03:20 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-369000 kubectl --                                             | functional-369000    | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
	|         | --context functional-369000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-369000                                                     | functional-369000    | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 03:20:38
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 03:20:38.708582    7151 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:20:38.708722    7151 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:20:38.708724    7151 out.go:304] Setting ErrFile to fd 2...
	I0812 03:20:38.708725    7151 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:20:38.708850    7151 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:20:38.709886    7151 out.go:298] Setting JSON to false
	I0812 03:20:38.725763    7151 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4808,"bootTime":1723453230,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:20:38.725823    7151 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:20:38.731252    7151 out.go:177] * [functional-369000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:20:38.740283    7151 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:20:38.740302    7151 notify.go:220] Checking for updates...
	I0812 03:20:38.749173    7151 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:20:38.752199    7151 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:20:38.753540    7151 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:20:38.756159    7151 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:20:38.759331    7151 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:20:38.762495    7151 config.go:182] Loaded profile config "functional-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:20:38.762546    7151 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:20:38.767086    7151 out.go:177] * Using the qemu2 driver based on existing profile
	I0812 03:20:38.774146    7151 start.go:297] selected driver: qemu2
	I0812 03:20:38.774150    7151 start.go:901] validating driver "qemu2" against &{Name:functional-369000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-369000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:20:38.774190    7151 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:20:38.776439    7151 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 03:20:38.776458    7151 cni.go:84] Creating CNI manager for ""
	I0812 03:20:38.776465    7151 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0812 03:20:38.776508    7151 start.go:340] cluster config:
	{Name:functional-369000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-369000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:20:38.780268    7151 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:20:38.791141    7151 out.go:177] * Starting "functional-369000" primary control-plane node in "functional-369000" cluster
	I0812 03:20:38.796116    7151 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0812 03:20:38.796130    7151 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0812 03:20:38.796139    7151 cache.go:56] Caching tarball of preloaded images
	I0812 03:20:38.796206    7151 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0812 03:20:38.796210    7151 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0812 03:20:38.796290    7151 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/functional-369000/config.json ...
	I0812 03:20:38.796636    7151 start.go:360] acquireMachinesLock for functional-369000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:20:38.796675    7151 start.go:364] duration metric: took 34.5µs to acquireMachinesLock for "functional-369000"
	I0812 03:20:38.796684    7151 start.go:96] Skipping create...Using existing machine configuration
	I0812 03:20:38.796691    7151 fix.go:54] fixHost starting: 
	I0812 03:20:38.796820    7151 fix.go:112] recreateIfNeeded on functional-369000: state=Stopped err=<nil>
	W0812 03:20:38.796827    7151 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 03:20:38.800088    7151 out.go:177] * Restarting existing qemu2 VM for "functional-369000" ...
	I0812 03:20:38.811211    7151 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:20:38.811256    7151 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/functional-369000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/functional-369000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/functional-369000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:c9:61:20:3c:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/functional-369000/disk.qcow2
	I0812 03:20:38.813366    7151 main.go:141] libmachine: STDOUT: 
	I0812 03:20:38.813383    7151 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:20:38.813410    7151 fix.go:56] duration metric: took 16.72125ms for fixHost
	I0812 03:20:38.813414    7151 start.go:83] releasing machines lock for "functional-369000", held for 16.735666ms
	W0812 03:20:38.813418    7151 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:20:38.813457    7151 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:20:38.813462    7151 start.go:729] Will try again in 5 seconds ...
	I0812 03:20:43.815642    7151 start.go:360] acquireMachinesLock for functional-369000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:20:43.816069    7151 start.go:364] duration metric: took 352.292µs to acquireMachinesLock for "functional-369000"
	I0812 03:20:43.816188    7151 start.go:96] Skipping create...Using existing machine configuration
	I0812 03:20:43.816202    7151 fix.go:54] fixHost starting: 
	I0812 03:20:43.816982    7151 fix.go:112] recreateIfNeeded on functional-369000: state=Stopped err=<nil>
	W0812 03:20:43.816999    7151 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 03:20:43.825493    7151 out.go:177] * Restarting existing qemu2 VM for "functional-369000" ...
	I0812 03:20:43.830477    7151 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:20:43.830672    7151 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/functional-369000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/functional-369000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/functional-369000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:c9:61:20:3c:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/functional-369000/disk.qcow2
	I0812 03:20:43.840493    7151 main.go:141] libmachine: STDOUT: 
	I0812 03:20:43.840534    7151 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:20:43.840624    7151 fix.go:56] duration metric: took 24.424917ms for fixHost
	I0812 03:20:43.840638    7151 start.go:83] releasing machines lock for "functional-369000", held for 24.555042ms
	W0812 03:20:43.840799    7151 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-369000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:20:43.848495    7151 out.go:177] 
	W0812 03:20:43.852553    7151 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:20:43.852573    7151 out.go:239] * 
	W0812 03:20:43.855024    7151 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:20:43.863305    7151 out.go:177] 
	
	
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test.go:1238: out/minikube-darwin-arm64 -p functional-369000 logs failed: exit status 83
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-858000 | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT |                     |
|         | -p download-only-858000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT | 12 Aug 24 03:19 PDT |
| delete  | -p download-only-858000                                                  | download-only-858000 | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT | 12 Aug 24 03:19 PDT |
| start   | -o=json --download-only                                                  | download-only-681000 | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT |                     |
|         | -p download-only-681000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.3                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT | 12 Aug 24 03:19 PDT |
| delete  | -p download-only-681000                                                  | download-only-681000 | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT | 12 Aug 24 03:19 PDT |
| start   | -o=json --download-only                                                  | download-only-833000 | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT |                     |
|         | -p download-only-833000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.0-rc.0                                        |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT | 12 Aug 24 03:19 PDT |
| delete  | -p download-only-833000                                                  | download-only-833000 | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT | 12 Aug 24 03:19 PDT |
| delete  | -p download-only-858000                                                  | download-only-858000 | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT | 12 Aug 24 03:19 PDT |
| delete  | -p download-only-681000                                                  | download-only-681000 | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT | 12 Aug 24 03:19 PDT |
| delete  | -p download-only-833000                                                  | download-only-833000 | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT | 12 Aug 24 03:19 PDT |
| start   | --download-only -p                                                       | binary-mirror-249000 | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT |                     |
|         | binary-mirror-249000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51037                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-249000                                                  | binary-mirror-249000 | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT | 12 Aug 24 03:19 PDT |
| addons  | enable dashboard -p                                                      | addons-717000        | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT |                     |
|         | addons-717000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-717000        | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT |                     |
|         | addons-717000                                                            |                      |         |         |                     |                     |
| start   | -p addons-717000 --wait=true                                             | addons-717000        | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-717000                                                         | addons-717000        | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT | 12 Aug 24 03:20 PDT |
| start   | -p nospam-338000 -n=1 --memory=2250 --wait=false                         | nospam-338000        | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-338000 --log_dir                                                  | nospam-338000        | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-338000 --log_dir                                                  | nospam-338000        | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-338000 --log_dir                                                  | nospam-338000        | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-338000 --log_dir                                                  | nospam-338000        | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-338000 --log_dir                                                  | nospam-338000        | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-338000 --log_dir                                                  | nospam-338000        | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-338000 --log_dir                                                  | nospam-338000        | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-338000 --log_dir                                                  | nospam-338000        | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-338000 --log_dir                                                  | nospam-338000        | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-338000 --log_dir                                                  | nospam-338000        | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT | 12 Aug 24 03:20 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-338000 --log_dir                                                  | nospam-338000        | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT | 12 Aug 24 03:20 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-338000 --log_dir                                                  | nospam-338000        | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT | 12 Aug 24 03:20 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-338000                                                         | nospam-338000        | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT | 12 Aug 24 03:20 PDT |
| start   | -p functional-369000                                                     | functional-369000    | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-369000                                                     | functional-369000    | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-369000 cache add                                              | functional-369000    | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT | 12 Aug 24 03:20 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-369000 cache add                                              | functional-369000    | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT | 12 Aug 24 03:20 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-369000 cache add                                              | functional-369000    | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT | 12 Aug 24 03:20 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-369000 cache add                                              | functional-369000    | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT | 12 Aug 24 03:20 PDT |
|         | minikube-local-cache-test:functional-369000                              |                      |         |         |                     |                     |
| cache   | functional-369000 cache delete                                           | functional-369000    | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT | 12 Aug 24 03:20 PDT |
|         | minikube-local-cache-test:functional-369000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT | 12 Aug 24 03:20 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT | 12 Aug 24 03:20 PDT |
| ssh     | functional-369000 ssh sudo                                               | functional-369000    | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-369000                                                        | functional-369000    | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-369000 ssh                                                    | functional-369000    | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-369000 cache reload                                           | functional-369000    | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT | 12 Aug 24 03:20 PDT |
| ssh     | functional-369000 ssh                                                    | functional-369000    | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT | 12 Aug 24 03:20 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT | 12 Aug 24 03:20 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-369000 kubectl --                                             | functional-369000    | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
|         | --context functional-369000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-369000                                                     | functional-369000    | jenkins | v1.33.1 | 12 Aug 24 03:20 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
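
The final audit row above is the start invocation that the "==> Last Start <==" trace below follows. A minimal way to replay it by hand against the same profile (flags copied from that row; the binary path matches the harness invocations elsewhere in this report, and --alsologtostderr is added here for a verbose trace, not part of the original row):

  out/minikube-darwin-arm64 start -p functional-369000 \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
    --wait=all --alsologtostderr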

==> Last Start <==
Log file created at: 2024/08/12 03:20:38
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0812 03:20:38.708582    7151 out.go:291] Setting OutFile to fd 1 ...
I0812 03:20:38.708722    7151 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 03:20:38.708724    7151 out.go:304] Setting ErrFile to fd 2...
I0812 03:20:38.708725    7151 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 03:20:38.708850    7151 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
I0812 03:20:38.709886    7151 out.go:298] Setting JSON to false
I0812 03:20:38.725763    7151 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4808,"bootTime":1723453230,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0812 03:20:38.725823    7151 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0812 03:20:38.731252    7151 out.go:177] * [functional-369000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0812 03:20:38.740283    7151 out.go:177]   - MINIKUBE_LOCATION=19409
I0812 03:20:38.740302    7151 notify.go:220] Checking for updates...
I0812 03:20:38.749173    7151 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
I0812 03:20:38.752199    7151 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0812 03:20:38.753540    7151 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0812 03:20:38.756159    7151 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
I0812 03:20:38.759331    7151 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0812 03:20:38.762495    7151 config.go:182] Loaded profile config "functional-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0812 03:20:38.762546    7151 driver.go:392] Setting default libvirt URI to qemu:///system
I0812 03:20:38.767086    7151 out.go:177] * Using the qemu2 driver based on existing profile
I0812 03:20:38.774146    7151 start.go:297] selected driver: qemu2
I0812 03:20:38.774150    7151 start.go:901] validating driver "qemu2" against &{Name:functional-369000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-369000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0812 03:20:38.774190    7151 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0812 03:20:38.776439    7151 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0812 03:20:38.776458    7151 cni.go:84] Creating CNI manager for ""
I0812 03:20:38.776465    7151 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0812 03:20:38.776508    7151 start.go:340] cluster config:
{Name:functional-369000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-369000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0812 03:20:38.780268    7151 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0812 03:20:38.791141    7151 out.go:177] * Starting "functional-369000" primary control-plane node in "functional-369000" cluster
I0812 03:20:38.796116    7151 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0812 03:20:38.796130    7151 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0812 03:20:38.796139    7151 cache.go:56] Caching tarball of preloaded images
I0812 03:20:38.796206    7151 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0812 03:20:38.796210    7151 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0812 03:20:38.796290    7151 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/functional-369000/config.json ...
I0812 03:20:38.796636    7151 start.go:360] acquireMachinesLock for functional-369000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0812 03:20:38.796675    7151 start.go:364] duration metric: took 34.5µs to acquireMachinesLock for "functional-369000"
I0812 03:20:38.796684    7151 start.go:96] Skipping create...Using existing machine configuration
I0812 03:20:38.796691    7151 fix.go:54] fixHost starting: 
I0812 03:20:38.796820    7151 fix.go:112] recreateIfNeeded on functional-369000: state=Stopped err=<nil>
W0812 03:20:38.796827    7151 fix.go:138] unexpected machine state, will restart: <nil>
I0812 03:20:38.800088    7151 out.go:177] * Restarting existing qemu2 VM for "functional-369000" ...
I0812 03:20:38.811211    7151 qemu.go:418] Using hvf for hardware acceleration
I0812 03:20:38.811256    7151 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/functional-369000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/functional-369000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/functional-369000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:c9:61:20:3c:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/functional-369000/disk.qcow2
I0812 03:20:38.813366    7151 main.go:141] libmachine: STDOUT: 
I0812 03:20:38.813383    7151 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0812 03:20:38.813410    7151 fix.go:56] duration metric: took 16.72125ms for fixHost
I0812 03:20:38.813414    7151 start.go:83] releasing machines lock for "functional-369000", held for 16.735666ms
W0812 03:20:38.813418    7151 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0812 03:20:38.813457    7151 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0812 03:20:38.813462    7151 start.go:729] Will try again in 5 seconds ...
I0812 03:20:43.815642    7151 start.go:360] acquireMachinesLock for functional-369000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0812 03:20:43.816069    7151 start.go:364] duration metric: took 352.292µs to acquireMachinesLock for "functional-369000"
I0812 03:20:43.816188    7151 start.go:96] Skipping create...Using existing machine configuration
I0812 03:20:43.816202    7151 fix.go:54] fixHost starting: 
I0812 03:20:43.816982    7151 fix.go:112] recreateIfNeeded on functional-369000: state=Stopped err=<nil>
W0812 03:20:43.816999    7151 fix.go:138] unexpected machine state, will restart: <nil>
I0812 03:20:43.825493    7151 out.go:177] * Restarting existing qemu2 VM for "functional-369000" ...
I0812 03:20:43.830477    7151 qemu.go:418] Using hvf for hardware acceleration
I0812 03:20:43.830672    7151 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/functional-369000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/functional-369000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/functional-369000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:c9:61:20:3c:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/functional-369000/disk.qcow2
I0812 03:20:43.840493    7151 main.go:141] libmachine: STDOUT: 
I0812 03:20:43.840534    7151 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0812 03:20:43.840624    7151 fix.go:56] duration metric: took 24.424917ms for fixHost
I0812 03:20:43.840638    7151 start.go:83] releasing machines lock for "functional-369000", held for 24.555042ms
W0812 03:20:43.840799    7151 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-369000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0812 03:20:43.848495    7151 out.go:177] 
W0812 03:20:43.852553    7151 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0812 03:20:43.852573    7151 out.go:239] * 
W0812 03:20:43.855024    7151 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0812 03:20:43.863305    7151 out.go:177] 

* The control-plane node functional-369000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-369000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
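Both restart attempts in the log above die on the same driver error: Failed to connect to "/var/run/socket_vmnet": Connection refused. A minimal triage sketch for the build agent, assuming the Homebrew-managed socket_vmnet install implied by the /opt/socket_vmnet paths in the qemu command line above:

  # Is the socket_vmnet daemon running, and does its socket exist?
  pgrep -fl socket_vmnet
  ls -l /var/run/socket_vmnet
  # If not, start it via Homebrew services, as the minikube qemu2 driver
  # docs describe for brew-based installs:
  HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services start socket_vmnet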

TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd630545189/001/logs.txt
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
(identical to the Audit table shown above)

==> Last Start <==
Log file created at: 2024/08/12 03:20:38
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0812 03:20:38.708582    7151 out.go:291] Setting OutFile to fd 1 ...
I0812 03:20:38.708722    7151 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 03:20:38.708724    7151 out.go:304] Setting ErrFile to fd 2...
I0812 03:20:38.708725    7151 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 03:20:38.708850    7151 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
I0812 03:20:38.709886    7151 out.go:298] Setting JSON to false
I0812 03:20:38.725763    7151 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4808,"bootTime":1723453230,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0812 03:20:38.725823    7151 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0812 03:20:38.731252    7151 out.go:177] * [functional-369000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0812 03:20:38.740283    7151 out.go:177]   - MINIKUBE_LOCATION=19409
I0812 03:20:38.740302    7151 notify.go:220] Checking for updates...
I0812 03:20:38.749173    7151 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
I0812 03:20:38.752199    7151 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0812 03:20:38.753540    7151 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0812 03:20:38.756159    7151 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
I0812 03:20:38.759331    7151 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0812 03:20:38.762495    7151 config.go:182] Loaded profile config "functional-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0812 03:20:38.762546    7151 driver.go:392] Setting default libvirt URI to qemu:///system
I0812 03:20:38.767086    7151 out.go:177] * Using the qemu2 driver based on existing profile
I0812 03:20:38.774146    7151 start.go:297] selected driver: qemu2
I0812 03:20:38.774150    7151 start.go:901] validating driver "qemu2" against &{Name:functional-369000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-369000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0812 03:20:38.774190    7151 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0812 03:20:38.776439    7151 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0812 03:20:38.776458    7151 cni.go:84] Creating CNI manager for ""
I0812 03:20:38.776465    7151 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0812 03:20:38.776508    7151 start.go:340] cluster config:
{Name:functional-369000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-369000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0812 03:20:38.780268    7151 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0812 03:20:38.791141    7151 out.go:177] * Starting "functional-369000" primary control-plane node in "functional-369000" cluster
I0812 03:20:38.796116    7151 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0812 03:20:38.796130    7151 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0812 03:20:38.796139    7151 cache.go:56] Caching tarball of preloaded images
I0812 03:20:38.796206    7151 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0812 03:20:38.796210    7151 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0812 03:20:38.796290    7151 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/functional-369000/config.json ...
I0812 03:20:38.796636    7151 start.go:360] acquireMachinesLock for functional-369000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0812 03:20:38.796675    7151 start.go:364] duration metric: took 34.5µs to acquireMachinesLock for "functional-369000"
I0812 03:20:38.796684    7151 start.go:96] Skipping create...Using existing machine configuration
I0812 03:20:38.796691    7151 fix.go:54] fixHost starting: 
I0812 03:20:38.796820    7151 fix.go:112] recreateIfNeeded on functional-369000: state=Stopped err=<nil>
W0812 03:20:38.796827    7151 fix.go:138] unexpected machine state, will restart: <nil>
I0812 03:20:38.800088    7151 out.go:177] * Restarting existing qemu2 VM for "functional-369000" ...
I0812 03:20:38.811211    7151 qemu.go:418] Using hvf for hardware acceleration
I0812 03:20:38.811256    7151 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/functional-369000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/functional-369000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/functional-369000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:c9:61:20:3c:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/functional-369000/disk.qcow2
I0812 03:20:38.813366    7151 main.go:141] libmachine: STDOUT: 
I0812 03:20:38.813383    7151 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0812 03:20:38.813410    7151 fix.go:56] duration metric: took 16.72125ms for fixHost
I0812 03:20:38.813414    7151 start.go:83] releasing machines lock for "functional-369000", held for 16.735666ms
W0812 03:20:38.813418    7151 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0812 03:20:38.813457    7151 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0812 03:20:38.813462    7151 start.go:729] Will try again in 5 seconds ...
I0812 03:20:43.815642    7151 start.go:360] acquireMachinesLock for functional-369000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0812 03:20:43.816069    7151 start.go:364] duration metric: took 352.292µs to acquireMachinesLock for "functional-369000"
I0812 03:20:43.816188    7151 start.go:96] Skipping create...Using existing machine configuration
I0812 03:20:43.816202    7151 fix.go:54] fixHost starting: 
I0812 03:20:43.816982    7151 fix.go:112] recreateIfNeeded on functional-369000: state=Stopped err=<nil>
W0812 03:20:43.816999    7151 fix.go:138] unexpected machine state, will restart: <nil>
I0812 03:20:43.825493    7151 out.go:177] * Restarting existing qemu2 VM for "functional-369000" ...
I0812 03:20:43.830477    7151 qemu.go:418] Using hvf for hardware acceleration
I0812 03:20:43.830672    7151 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/functional-369000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/functional-369000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/functional-369000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:c9:61:20:3c:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/functional-369000/disk.qcow2
I0812 03:20:43.840493    7151 main.go:141] libmachine: STDOUT: 
I0812 03:20:43.840534    7151 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0812 03:20:43.840624    7151 fix.go:56] duration metric: took 24.424917ms for fixHost
I0812 03:20:43.840638    7151 start.go:83] releasing machines lock for "functional-369000", held for 24.555042ms
W0812 03:20:43.840799    7151 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-369000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0812 03:20:43.848495    7151 out.go:177] 
W0812 03:20:43.852553    7151 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0812 03:20:43.852573    7151 out.go:239] * 
W0812 03:20:43.855024    7151 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0812 03:20:43.863305    7151 out.go:177] 

--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
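Note: every start attempt above dies on the same qemu2 driver error: the connection to /var/run/socket_vmnet is refused, so the guest VM never boots and every later test inherits a stopped host. A minimal diagnostic sketch for the build agent, assuming socket_vmnet was installed through Homebrew as the qemu2 driver docs describe (paths and service manager may differ per machine):

	# check whether the socket_vmnet daemon is running and that its socket exists
	sudo launchctl list | grep -i socket_vmnet
	ls -l /var/run/socket_vmnet
	# if it is missing, (re)start it via Homebrew services, then retry the profile
	sudo "$(which brew)" services start socket_vmnet
	out/minikube-darwin-arm64 start -p functional-369000
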
TestFunctional/serial/InvalidService (0.03s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-369000 apply -f testdata/invalidsvc.yaml
functional_test.go:2321: (dbg) Non-zero exit: kubectl --context functional-369000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.459833ms)

                                                
                                                
** stderr ** 
	error: context "functional-369000" does not exist

                                                
                                                
** /stderr **
functional_test.go:2323: kubectl --context functional-369000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
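Note: this failure is secondary. Because "minikube start" never brought the node up, no "functional-369000" context was ever written to the kubeconfig, so every kubectl call against that context fails identically. A quick confirmation sketch (the kubeconfig path is the one from the start log above):

	KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig kubectl config get-contexts
	kubectl --context functional-369000 get pods   # reproduces: context "functional-369000" does not exist
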
TestFunctional/parallel/DashboardCmd (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-369000 --alsologtostderr -v=1]
functional_test.go:918: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-369000 --alsologtostderr -v=1] ...
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-369000 --alsologtostderr -v=1] stdout:
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-369000 --alsologtostderr -v=1] stderr:
I0812 03:21:30.260373    7458 out.go:291] Setting OutFile to fd 1 ...
I0812 03:21:30.260799    7458 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 03:21:30.260802    7458 out.go:304] Setting ErrFile to fd 2...
I0812 03:21:30.260805    7458 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 03:21:30.260968    7458 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
I0812 03:21:30.261194    7458 mustload.go:65] Loading cluster: functional-369000
I0812 03:21:30.261407    7458 config.go:182] Loaded profile config "functional-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0812 03:21:30.265958    7458 out.go:177] * The control-plane node functional-369000 host is not running: state=Stopped
I0812 03:21:30.269956    7458 out.go:177]   To start a cluster, run: "minikube start -p functional-369000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-369000 -n functional-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-369000 -n functional-369000: exit status 7 (40.625959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)
TestFunctional/parallel/StatusCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 status
functional_test.go:854: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 status: exit status 7 (28.665209ms)

                                                
                                                
-- stdout --
	functional-369000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
functional_test.go:856: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-369000 status" : exit status 7
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:860: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (28.547167ms)

                                                
                                                
-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

                                                
                                                
-- /stdout --
functional_test.go:862: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-369000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 status -o json
functional_test.go:872: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 status -o json: exit status 7 (29.052708ms)

                                                
                                                
-- stdout --
	{"Name":"functional-369000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
functional_test.go:874: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-369000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-369000 -n functional-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-369000 -n functional-369000: exit status 7 (27.589292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.11s)
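Note: exit status 7 here is the expected encoding for a fully stopped profile; per "minikube status --help", the exit code sets bit 1 when the host is not OK, bit 2 for the cluster, and bit 4 for Kubernetes, so 7 means all three are down. A sketch of scripting against the JSON form shown above (assumes jq is available on the agent):

	host=$(out/minikube-darwin-arm64 -p functional-369000 status -o json | jq -r .Host)
	if [ "$host" != "Running" ]; then
	  echo "host is $host; run: out/minikube-darwin-arm64 start -p functional-369000"
	fi
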
TestFunctional/parallel/ServiceCmdConnect (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-369000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1627: (dbg) Non-zero exit: kubectl --context functional-369000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.24125ms)

                                                
                                                
** stderr ** 
	error: context "functional-369000" does not exist

                                                
                                                
** /stderr **
functional_test.go:1633: failed to create hello-node deployment with this command "kubectl --context functional-369000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-369000 describe po hello-node-connect
functional_test.go:1602: (dbg) Non-zero exit: kubectl --context functional-369000 describe po hello-node-connect: exit status 1 (26.946208ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-369000

                                                
                                                
** /stderr **
functional_test.go:1604: "kubectl --context functional-369000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1606: hello-node pod describe:
functional_test.go:1608: (dbg) Run:  kubectl --context functional-369000 logs -l app=hello-node-connect
functional_test.go:1608: (dbg) Non-zero exit: kubectl --context functional-369000 logs -l app=hello-node-connect: exit status 1 (26.287958ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-369000

                                                
                                                
** /stderr **
functional_test.go:1610: "kubectl --context functional-369000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1612: hello-node logs:
functional_test.go:1614: (dbg) Run:  kubectl --context functional-369000 describe svc hello-node-connect
functional_test.go:1614: (dbg) Non-zero exit: kubectl --context functional-369000 describe svc hello-node-connect: exit status 1 (26.026292ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-369000

                                                
                                                
** /stderr **
functional_test.go:1616: "kubectl --context functional-369000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1618: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-369000 -n functional-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-369000 -n functional-369000: exit status 7 (30.550666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)
TestFunctional/parallel/PersistentVolumeClaim (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-369000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-369000 -n functional-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-369000 -n functional-369000: exit status 7 (29.381209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)
TestFunctional/parallel/SSHCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh "echo hello"
functional_test.go:1725: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh "echo hello": exit status 83 (49.598417ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

                                                
                                                
-- /stdout --
functional_test.go:1730: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-369000 ssh \"echo hello\"" : exit status 83
functional_test.go:1734: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-369000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-369000\"\n"*. args "out/minikube-darwin-arm64 -p functional-369000 ssh \"echo hello\""
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh "cat /etc/hostname"
functional_test.go:1742: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh "cat /etc/hostname": exit status 83 (39.3695ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

                                                
                                                
-- /stdout --
functional_test.go:1748: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-369000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1752: expected minikube ssh command output to be -"functional-369000"- but got *"* The control-plane node functional-369000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-369000\"\n"*. args "out/minikube-darwin-arm64 -p functional-369000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-369000 -n functional-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-369000 -n functional-369000: exit status 7 (29.079292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.12s)
TestFunctional/parallel/CpCmd (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (51.086625ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

                                                
                                                
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-369000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh -n functional-369000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh -n functional-369000 "sudo cat /home/docker/cp-test.txt": exit status 83 (41.128584ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

                                                
                                                
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-369000 ssh -n functional-369000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-369000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-369000\"\n",
}, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 cp functional-369000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd919429446/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 cp functional-369000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd919429446/001/cp-test.txt: exit status 83 (40.483125ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

                                                
                                                
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-369000 cp functional-369000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd919429446/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh -n functional-369000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh -n functional-369000 "sudo cat /home/docker/cp-test.txt": exit status 83 (39.906417ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

                                                
                                                
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-369000 ssh -n functional-369000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd919429446/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"* The control-plane node functional-369000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-369000\"\n",
+ 	"",
)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (45.822792ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

                                                
                                                
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-369000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh -n functional-369000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh -n functional-369000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (40.884166ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

                                                
                                                
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-369000 ssh -n functional-369000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-369000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-369000\"\n",
}, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.26s)
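Note: the strings.Join({...}) blocks above are want/got diffs from the test helpers: "-" lines are the expected cp-test.txt content, "+" lines are the stopped-host banner that came back instead. For reference, the two copy directions the test exercises look like this against a running guest (same profile name as above):

	# host -> node
	out/minikube-darwin-arm64 -p functional-369000 cp testdata/cp-test.txt /home/docker/cp-test.txt
	# node -> host
	out/minikube-darwin-arm64 -p functional-369000 cp functional-369000:/home/docker/cp-test.txt ./cp-test.txt
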
TestFunctional/parallel/FileSync (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/6841/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh "sudo cat /etc/test/nested/copy/6841/hosts"
functional_test.go:1931: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh "sudo cat /etc/test/nested/copy/6841/hosts": exit status 83 (39.96725ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

                                                
                                                
-- /stdout --
functional_test.go:1933: out/minikube-darwin-arm64 -p functional-369000 ssh "sudo cat /etc/test/nested/copy/6841/hosts" failed: exit status 83
functional_test.go:1936: file sync test content: * The control-plane node functional-369000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-369000"
functional_test.go:1946: /etc/sync.test content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-369000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-369000\"\n",
}, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-369000 -n functional-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-369000 -n functional-369000: exit status 7 (29.514875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.07s)
TestFunctional/parallel/CertSync (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/6841.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh "sudo cat /etc/ssl/certs/6841.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh "sudo cat /etc/ssl/certs/6841.pem": exit status 83 (39.972542ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

                                                
                                                
-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/6841.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-369000 ssh \"sudo cat /etc/ssl/certs/6841.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/6841.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-369000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-369000"
	"""
)
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/6841.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh "sudo cat /usr/share/ca-certificates/6841.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh "sudo cat /usr/share/ca-certificates/6841.pem": exit status 83 (37.051584ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

                                                
                                                
-- /stdout --
functional_test.go:1975: failed to check existence of "/usr/share/ca-certificates/6841.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-369000 ssh \"sudo cat /usr/share/ca-certificates/6841.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/6841.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-369000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-369000"
	"""
)
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (45.448583ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

                                                
                                                
-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-369000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-369000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-369000"
	"""
)
functional_test.go:1999: Checking for existence of /etc/ssl/certs/68412.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh "sudo cat /etc/ssl/certs/68412.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh "sudo cat /etc/ssl/certs/68412.pem": exit status 83 (40.51225ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

                                                
                                                
-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/68412.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-369000 ssh \"sudo cat /etc/ssl/certs/68412.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/68412.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-369000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-369000"
	"""
)
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/68412.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh "sudo cat /usr/share/ca-certificates/68412.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh "sudo cat /usr/share/ca-certificates/68412.pem": exit status 83 (38.693833ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

                                                
                                                
-- /stdout --
functional_test.go:2002: failed to check existence of "/usr/share/ca-certificates/68412.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-369000 ssh \"sudo cat /usr/share/ca-certificates/68412.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/68412.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-369000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-369000"
	"""
)
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (39.71625ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

                                                
                                                
-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-369000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-369000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-369000"
	"""
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-369000 -n functional-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-369000 -n functional-369000: exit status 7 (29.761083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.27s)
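For reference, the cert-sync check above can be reproduced by hand once the profile is running; a minimal sketch, assuming the same profile name and that minikube_test2.pem is the local copy of the test cert:
	out/minikube-darwin-arm64 start -p functional-369000
	# compare the cert synced into the VM against the local test cert; empty diff output means they match
	out/minikube-darwin-arm64 -p functional-369000 ssh "sudo cat /usr/share/ca-certificates/68412.pem" | diff - minikube_test2.pem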

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-369000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:219: (dbg) Non-zero exit: kubectl --context functional-369000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (25.94575ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-369000

** /stderr **
functional_test.go:221: failed to 'kubectl get nodes' with args "kubectl --context functional-369000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:227: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-369000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-369000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-369000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-369000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-369000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-369000 -n functional-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-369000 -n functional-369000: exit status 7 (30.791166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
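The label assertion above can be checked manually once the context exists; a sketch, assuming a running cluster with the same context name:
	# list all node labels; the test expects the minikube.k8s.io/commit, version, updated_at, name and primary keys
	kubectl --context functional-369000 get nodes --show-labels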

TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh "sudo systemctl is-active crio": exit status 83 (38.550292ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test.go:2030: output of 
-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --: exit status 83
functional_test.go:2033: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-369000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-369000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)
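On a healthy cluster with the docker runtime, the same probe should find crio inactive rather than erroring out; a sketch:
	# "inactive" with a non-zero exit code is the expected result when docker is the active runtime
	out/minikube-darwin-arm64 -p functional-369000 ssh "sudo systemctl is-active crio"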

TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 version -o=json --components
functional_test.go:2270: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 version -o=json --components: exit status 83 (38.762334ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test.go:2272: error version: exit status 83
functional_test.go:2277: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-369000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-369000"
functional_test.go:2277: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-369000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-369000"
functional_test.go:2277: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-369000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-369000"
functional_test.go:2277: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-369000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-369000"
functional_test.go:2277: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-369000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-369000"
functional_test.go:2277: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-369000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-369000"
functional_test.go:2277: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-369000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-369000"
functional_test.go:2277: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-369000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-369000"
functional_test.go:2277: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-369000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-369000"
functional_test.go:2277: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-369000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-369000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)
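For reference, the command under test; on a running cluster it emits a JSON document whose keys include the component versions the test asserts on (buildctl, commit, containerd, crictl, crio, ctr, docker, minikubeVersion, podman, crun):
	out/minikube-darwin-arm64 -p functional-369000 version -o=json --components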

TestFunctional/parallel/ImageCommands/ImageListShort (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-369000 image ls --format short --alsologtostderr:

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-369000 image ls --format short --alsologtostderr:
I0812 03:21:30.655950    7473 out.go:291] Setting OutFile to fd 1 ...
I0812 03:21:30.656095    7473 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 03:21:30.656105    7473 out.go:304] Setting ErrFile to fd 2...
I0812 03:21:30.656107    7473 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 03:21:30.656254    7473 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
I0812 03:21:30.656665    7473 config.go:182] Loaded profile config "functional-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0812 03:21:30.656730    7473 config.go:182] Loaded profile config "functional-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:275: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.03s)
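The ImageList variants here and below exercise the same listing in four output formats; for reference, the commands, each of which should include registry.k8s.io/pause once the VM is running:
	out/minikube-darwin-arm64 -p functional-369000 image ls --format short
	out/minikube-darwin-arm64 -p functional-369000 image ls --format table
	out/minikube-darwin-arm64 -p functional-369000 image ls --format json
	out/minikube-darwin-arm64 -p functional-369000 image ls --format yaml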

TestFunctional/parallel/ImageCommands/ImageListTable (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-369000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-369000 image ls --format table --alsologtostderr:
I0812 03:21:30.761931    7479 out.go:291] Setting OutFile to fd 1 ...
I0812 03:21:30.762075    7479 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 03:21:30.762078    7479 out.go:304] Setting ErrFile to fd 2...
I0812 03:21:30.762080    7479 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 03:21:30.762232    7479 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
I0812 03:21:30.762662    7479 config.go:182] Loaded profile config "functional-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0812 03:21:30.762724    7479 config.go:182] Loaded profile config "functional-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:275: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.03s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-369000 image ls --format json --alsologtostderr:
[]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-369000 image ls --format json --alsologtostderr:
I0812 03:21:30.726980    7477 out.go:291] Setting OutFile to fd 1 ...
I0812 03:21:30.727131    7477 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 03:21:30.727134    7477 out.go:304] Setting ErrFile to fd 2...
I0812 03:21:30.727136    7477 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 03:21:30.727291    7477 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
I0812 03:21:30.727704    7477 config.go:182] Loaded profile config "functional-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0812 03:21:30.727765    7477 config.go:182] Loaded profile config "functional-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:275: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-369000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-369000 image ls --format yaml --alsologtostderr:
I0812 03:21:30.691985    7475 out.go:291] Setting OutFile to fd 1 ...
I0812 03:21:30.692124    7475 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 03:21:30.692127    7475 out.go:304] Setting ErrFile to fd 2...
I0812 03:21:30.692130    7475 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 03:21:30.692264    7475 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
I0812 03:21:30.692669    7475 config.go:182] Loaded profile config "functional-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0812 03:21:30.692767    7475 config.go:182] Loaded profile config "functional-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:275: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.03s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh pgrep buildkitd: exit status 83 (40.979333ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 image build -t localhost/my-image:functional-369000 testdata/build --alsologtostderr
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-369000 image build -t localhost/my-image:functional-369000 testdata/build --alsologtostderr:
I0812 03:21:30.835473    7483 out.go:291] Setting OutFile to fd 1 ...
I0812 03:21:30.836083    7483 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 03:21:30.836093    7483 out.go:304] Setting ErrFile to fd 2...
I0812 03:21:30.836096    7483 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 03:21:30.836312    7483 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
I0812 03:21:30.836924    7483 config.go:182] Loaded profile config "functional-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0812 03:21:30.837338    7483 config.go:182] Loaded profile config "functional-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0812 03:21:30.837582    7483 build_images.go:133] succeeded building to: 
I0812 03:21:30.837586    7483 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 image ls
functional_test.go:446: expected "localhost/my-image:functional-369000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)
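The build flow the test drives, as a manual sketch; testdata/build is assumed to contain a Dockerfile:
	# confirm the build daemon is up, as the test does, then build and verify the image landed
	out/minikube-darwin-arm64 -p functional-369000 ssh pgrep buildkitd
	out/minikube-darwin-arm64 -p functional-369000 image build -t localhost/my-image:functional-369000 testdata/build
	out/minikube-darwin-arm64 -p functional-369000 image ls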

TestFunctional/parallel/DockerEnv/bash (0.04s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-369000 docker-env) && out/minikube-darwin-arm64 status -p functional-369000"
functional_test.go:499: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-369000 docker-env) && out/minikube-darwin-arm64 status -p functional-369000": exit status 1 (42.719167ms)
functional_test.go:505: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.04s)
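docker-env only emits shell exports while the host is running; the intended flow, as a sketch:
	# point the local docker client at the daemon inside the minikube VM, then confirm the profile is healthy
	eval $(out/minikube-darwin-arm64 -p functional-369000 docker-env)
	docker ps
	out/minikube-darwin-arm64 status -p functional-369000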

TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 update-context --alsologtostderr -v=2: exit status 83 (41.872792ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
** stderr ** 
	I0812 03:21:30.531890    7467 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:21:30.532465    7467 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:21:30.532469    7467 out.go:304] Setting ErrFile to fd 2...
	I0812 03:21:30.532471    7467 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:21:30.532626    7467 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:21:30.532891    7467 mustload.go:65] Loading cluster: functional-369000
	I0812 03:21:30.533075    7467 config.go:182] Loaded profile config "functional-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:21:30.537028    7467 out.go:177] * The control-plane node functional-369000 host is not running: state=Stopped
	I0812 03:21:30.540982    7467 out.go:177]   To start a cluster, run: "minikube start -p functional-369000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-369000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-369000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-369000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)
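update-context rewrites the profile's kubeconfig entry; a sketch of a successful run on a live cluster, which the three UpdateContextCmd cases expect to report either "No changes" or "context has been updated":
	out/minikube-darwin-arm64 -p functional-369000 update-context --alsologtostderr -v=2
	# confirm the entry exists in the kubeconfig
	kubectl config get-contexts functional-369000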

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 update-context --alsologtostderr -v=2: exit status 83 (41.744042ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
** stderr ** 
	I0812 03:21:30.614998    7471 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:21:30.615147    7471 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:21:30.615150    7471 out.go:304] Setting ErrFile to fd 2...
	I0812 03:21:30.615153    7471 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:21:30.615293    7471 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:21:30.615533    7471 mustload.go:65] Loading cluster: functional-369000
	I0812 03:21:30.615746    7471 config.go:182] Loaded profile config "functional-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:21:30.620004    7471 out.go:177] * The control-plane node functional-369000 host is not running: state=Stopped
	I0812 03:21:30.623941    7471 out.go:177]   To start a cluster, run: "minikube start -p functional-369000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-369000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-369000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-369000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 update-context --alsologtostderr -v=2: exit status 83 (40.510667ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
** stderr ** 
	I0812 03:21:30.573419    7469 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:21:30.573587    7469 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:21:30.573590    7469 out.go:304] Setting ErrFile to fd 2...
	I0812 03:21:30.573592    7469 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:21:30.573745    7469 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:21:30.573976    7469 mustload.go:65] Loading cluster: functional-369000
	I0812 03:21:30.574159    7469 config.go:182] Loaded profile config "functional-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:21:30.578921    7469 out.go:177] * The control-plane node functional-369000 host is not running: state=Stopped
	I0812 03:21:30.583001    7469 out.go:177]   To start a cluster, run: "minikube start -p functional-369000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-369000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-369000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-369000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-369000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1437: (dbg) Non-zero exit: kubectl --context functional-369000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.350041ms)

** stderr ** 
	error: context "functional-369000" does not exist

** /stderr **
functional_test.go:1443: failed to create hello-node deployment with this command "kubectl --context functional-369000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 service list
functional_test.go:1459: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 service list: exit status 83 (40.18175ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test.go:1461: failed to do service list. args "out/minikube-darwin-arm64 -p functional-369000 service list" : exit status 83
functional_test.go:1464: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-369000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-369000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.04s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 service list -o json
functional_test.go:1489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 service list -o json: exit status 83 (54.180542ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test.go:1491: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-369000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 service --namespace=default --https --url hello-node: exit status 83 (42.535917ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test.go:1511: failed to get service url. args "out/minikube-darwin-arm64 -p functional-369000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

TestFunctional/parallel/ServiceCmd/Format (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 service hello-node --url --format={{.IP}}: exit status 83 (42.822667ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test.go:1542: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-369000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1548: "* The control-plane node functional-369000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-369000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

TestFunctional/parallel/ServiceCmd/URL (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 service hello-node --url: exit status 83 (41.850167ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test.go:1561: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-369000 service hello-node --url": exit status 83
functional_test.go:1565: found endpoint for hello-node: * The control-plane node functional-369000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-369000"
functional_test.go:1569: failed to parse "* The control-plane node functional-369000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-369000\"": parse "* The control-plane node functional-369000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-369000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)
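The ServiceCmd cases above assume a hello-node deployment exposed as a service; a sketch of that setup, with the NodePort type and port 8080 as assumptions based on the echoserver image:
	kubectl --context functional-369000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-369000 expose deployment hello-node --type=NodePort --port=8080
	out/minikube-darwin-arm64 -p functional-369000 service hello-node --url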

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-369000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-369000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0812 03:20:45.650913    7270 out.go:291] Setting OutFile to fd 1 ...
I0812 03:20:45.651043    7270 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 03:20:45.651045    7270 out.go:304] Setting ErrFile to fd 2...
I0812 03:20:45.651048    7270 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 03:20:45.651177    7270 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
I0812 03:20:45.651398    7270 mustload.go:65] Loading cluster: functional-369000
I0812 03:20:45.651600    7270 config.go:182] Loaded profile config "functional-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0812 03:20:45.655049    7270 out.go:177] * The control-plane node functional-369000 host is not running: state=Stopped
I0812 03:20:45.664967    7270 out.go:177]   To start a cluster, run: "minikube start -p functional-369000"

stdout: * The control-plane node functional-369000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-369000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-369000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 7271: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-369000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-369000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-369000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-369000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-369000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.07s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-369000": client config: context "functional-369000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (109.34s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-369000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-369000 get svc nginx-svc: exit status 1 (67.565ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-369000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-369000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (109.34s)
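Direct access depends on a running tunnel publishing the LoadBalancer IP; roughly, assuming the nginx-svc service from the earlier tunnel setup step:
	out/minikube-darwin-arm64 -p functional-369000 tunnel &
	# EXTERNAL-IP should be populated once the tunnel is up
	kubectl --context functional-369000 get svc nginx-svc
	curl "http://$(kubectl --context functional-369000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"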

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 image load --daemon kicbase/echo-server:functional-369000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-369000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.30s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 image load --daemon kicbase/echo-server:functional-369000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-369000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.28s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-369000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 image load --daemon kicbase/echo-server:functional-369000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-369000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.13s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 image save kicbase/echo-server:functional-369000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:386: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.03s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-369000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)
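ImageSaveToFile and ImageLoadFromFile form a round trip; a sketch, with /tmp/echo-server-save.tar as an illustrative path:
	out/minikube-darwin-arm64 -p functional-369000 image save kicbase/echo-server:functional-369000 /tmp/echo-server-save.tar
	out/minikube-darwin-arm64 -p functional-369000 image load /tmp/echo-server-save.tar
	# kicbase/echo-server:functional-369000 should now be listed
	out/minikube-darwin-arm64 -p functional-369000 image ls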

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.035835875s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 14 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)
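The scutil dump above shows resolver #8 routing cluster.local queries to 10.96.0.10, which is only reachable while the tunnel is up; the test's query, for reference (the response is expected to contain "ANSWER: 1"):
	out/minikube-darwin-arm64 -p functional-369000 tunnel &
	dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A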

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (36.34s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (36.34s)

TestMultiControlPlane/serial/StartCluster (9.9s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-760000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-760000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.826911625s)

-- stdout --
	* [ha-760000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-760000" primary control-plane node in "ha-760000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-760000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0812 03:23:36.876273    7526 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:23:36.876487    7526 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:23:36.876492    7526 out.go:304] Setting ErrFile to fd 2...
	I0812 03:23:36.876495    7526 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:23:36.876617    7526 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:23:36.877689    7526 out.go:298] Setting JSON to false
	I0812 03:23:36.893727    7526 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4986,"bootTime":1723453230,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:23:36.893864    7526 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:23:36.900788    7526 out.go:177] * [ha-760000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:23:36.907664    7526 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:23:36.907712    7526 notify.go:220] Checking for updates...
	I0812 03:23:36.914750    7526 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:23:36.917755    7526 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:23:36.920685    7526 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:23:36.923708    7526 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:23:36.926779    7526 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:23:36.929859    7526 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:23:36.933724    7526 out.go:177] * Using the qemu2 driver based on user configuration
	I0812 03:23:36.940784    7526 start.go:297] selected driver: qemu2
	I0812 03:23:36.940790    7526 start.go:901] validating driver "qemu2" against <nil>
	I0812 03:23:36.940799    7526 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:23:36.942988    7526 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 03:23:36.946695    7526 out.go:177] * Automatically selected the socket_vmnet network
	I0812 03:23:36.949819    7526 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 03:23:36.949834    7526 cni.go:84] Creating CNI manager for ""
	I0812 03:23:36.949839    7526 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0812 03:23:36.949843    7526 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0812 03:23:36.949874    7526 start.go:340] cluster config:
	{Name:ha-760000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-760000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client Soc
ketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:23:36.953465    7526 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:23:36.961738    7526 out.go:177] * Starting "ha-760000" primary control-plane node in "ha-760000" cluster
	I0812 03:23:36.965664    7526 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0812 03:23:36.965681    7526 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0812 03:23:36.965693    7526 cache.go:56] Caching tarball of preloaded images
	I0812 03:23:36.965755    7526 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0812 03:23:36.965761    7526 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0812 03:23:36.965970    7526 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/ha-760000/config.json ...
	I0812 03:23:36.965982    7526 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/ha-760000/config.json: {Name:mkfb6dd3bf82e9a48c238bda8db67d5ad20269e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:23:36.966346    7526 start.go:360] acquireMachinesLock for ha-760000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:23:36.966382    7526 start.go:364] duration metric: took 30.125µs to acquireMachinesLock for "ha-760000"
	I0812 03:23:36.966394    7526 start.go:93] Provisioning new machine with config: &{Name:ha-760000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.30.3 ClusterName:ha-760000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:23:36.966423    7526 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:23:36.973730    7526 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0812 03:23:36.991936    7526 start.go:159] libmachine.API.Create for "ha-760000" (driver="qemu2")
	I0812 03:23:36.991963    7526 client.go:168] LocalClient.Create starting
	I0812 03:23:36.992048    7526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:23:36.992081    7526 main.go:141] libmachine: Decoding PEM data...
	I0812 03:23:36.992102    7526 main.go:141] libmachine: Parsing certificate...
	I0812 03:23:36.992140    7526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:23:36.992164    7526 main.go:141] libmachine: Decoding PEM data...
	I0812 03:23:36.992176    7526 main.go:141] libmachine: Parsing certificate...
	I0812 03:23:36.992616    7526 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:23:37.144735    7526 main.go:141] libmachine: Creating SSH key...
	I0812 03:23:37.231291    7526 main.go:141] libmachine: Creating Disk image...
	I0812 03:23:37.231297    7526 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:23:37.231506    7526 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/ha-760000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/ha-760000/disk.qcow2
	I0812 03:23:37.240667    7526 main.go:141] libmachine: STDOUT: 
	I0812 03:23:37.240688    7526 main.go:141] libmachine: STDERR: 
	I0812 03:23:37.240746    7526 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/ha-760000/disk.qcow2 +20000M
	I0812 03:23:37.248562    7526 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:23:37.248583    7526 main.go:141] libmachine: STDERR: 
	I0812 03:23:37.248598    7526 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/ha-760000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/ha-760000/disk.qcow2
	I0812 03:23:37.248603    7526 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:23:37.248611    7526 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:23:37.248643    7526 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/ha-760000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/ha-760000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/ha-760000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:02:05:d3:18:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/ha-760000/disk.qcow2
	I0812 03:23:37.250241    7526 main.go:141] libmachine: STDOUT: 
	I0812 03:23:37.250260    7526 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:23:37.250276    7526 client.go:171] duration metric: took 258.313875ms to LocalClient.Create
	I0812 03:23:39.252422    7526 start.go:128] duration metric: took 2.286015s to createHost
	I0812 03:23:39.252499    7526 start.go:83] releasing machines lock for "ha-760000", held for 2.286144292s
	W0812 03:23:39.252537    7526 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:23:39.267755    7526 out.go:177] * Deleting "ha-760000" in qemu2 ...
	W0812 03:23:39.299704    7526 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:23:39.299732    7526 start.go:729] Will try again in 5 seconds ...
	I0812 03:23:44.301945    7526 start.go:360] acquireMachinesLock for ha-760000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:23:44.302473    7526 start.go:364] duration metric: took 422.917µs to acquireMachinesLock for "ha-760000"
	I0812 03:23:44.302595    7526 start.go:93] Provisioning new machine with config: &{Name:ha-760000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.30.3 ClusterName:ha-760000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:23:44.302885    7526 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:23:44.318711    7526 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0812 03:23:44.368937    7526 start.go:159] libmachine.API.Create for "ha-760000" (driver="qemu2")
	I0812 03:23:44.368988    7526 client.go:168] LocalClient.Create starting
	I0812 03:23:44.369113    7526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:23:44.369183    7526 main.go:141] libmachine: Decoding PEM data...
	I0812 03:23:44.369198    7526 main.go:141] libmachine: Parsing certificate...
	I0812 03:23:44.369267    7526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:23:44.369308    7526 main.go:141] libmachine: Decoding PEM data...
	I0812 03:23:44.369322    7526 main.go:141] libmachine: Parsing certificate...
	I0812 03:23:44.369941    7526 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:23:44.533352    7526 main.go:141] libmachine: Creating SSH key...
	I0812 03:23:44.611383    7526 main.go:141] libmachine: Creating Disk image...
	I0812 03:23:44.611388    7526 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:23:44.611583    7526 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/ha-760000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/ha-760000/disk.qcow2
	I0812 03:23:44.620735    7526 main.go:141] libmachine: STDOUT: 
	I0812 03:23:44.620752    7526 main.go:141] libmachine: STDERR: 
	I0812 03:23:44.620789    7526 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/ha-760000/disk.qcow2 +20000M
	I0812 03:23:44.628674    7526 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:23:44.628687    7526 main.go:141] libmachine: STDERR: 
	I0812 03:23:44.628698    7526 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/ha-760000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/ha-760000/disk.qcow2
	I0812 03:23:44.628703    7526 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:23:44.628709    7526 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:23:44.628742    7526 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/ha-760000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/ha-760000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/ha-760000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:96:10:38:a5:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/ha-760000/disk.qcow2
	I0812 03:23:44.630345    7526 main.go:141] libmachine: STDOUT: 
	I0812 03:23:44.630365    7526 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:23:44.630379    7526 client.go:171] duration metric: took 261.389209ms to LocalClient.Create
	I0812 03:23:46.632523    7526 start.go:128] duration metric: took 2.329639s to createHost
	I0812 03:23:46.632590    7526 start.go:83] releasing machines lock for "ha-760000", held for 2.330130042s
	W0812 03:23:46.632961    7526 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-760000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-760000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:23:46.641432    7526 out.go:177] 
	W0812 03:23:46.648542    7526 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:23:46.648573    7526 out.go:239] * 
	* 
	W0812 03:23:46.650969    7526 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:23:46.661402    7526 out.go:177] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-760000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000: exit status 7 (66.403875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-760000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (9.90s)
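
Note: both VM create attempts above die on the same line — Failed to connect to "/var/run/socket_vmnet": Connection refused — i.e. the socket_vmnet daemon was not listening on the agent, so every qemu2 start in this run fails identically. A two-line probe that separates "daemon down" from "driver misconfigured" (socket path taken from the SocketVMnetPath in the config above):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // If this fails with "connection refused", socket_vmnet is not running
        // and no qemu2 cluster start can succeed on this host.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }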

TestMultiControlPlane/serial/DeployApp (117.94s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-760000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-760000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (59.488417ms)

** stderr ** 
	error: cluster "ha-760000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-760000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-760000 -- rollout status deployment/busybox: exit status 1 (57.371208ms)

** stderr ** 
	error: no server found for cluster "ha-760000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-760000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-760000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.326292ms)

** stderr ** 
	error: no server found for cluster "ha-760000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-760000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-760000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.473875ms)

** stderr ** 
	error: no server found for cluster "ha-760000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-760000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-760000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.067625ms)

** stderr ** 
	error: no server found for cluster "ha-760000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-760000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-760000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.701375ms)

** stderr ** 
	error: no server found for cluster "ha-760000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-760000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-760000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.822791ms)

** stderr ** 
	error: no server found for cluster "ha-760000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-760000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-760000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.463292ms)

** stderr ** 
	error: no server found for cluster "ha-760000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-760000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-760000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.117041ms)

** stderr ** 
	error: no server found for cluster "ha-760000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-760000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-760000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.745542ms)

** stderr ** 
	error: no server found for cluster "ha-760000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-760000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-760000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.256583ms)

** stderr ** 
	error: no server found for cluster "ha-760000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-760000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-760000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.924625ms)

** stderr ** 
	error: no server found for cluster "ha-760000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-760000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-760000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.7915ms)

** stderr ** 
	error: no server found for cluster "ha-760000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-760000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-760000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.066166ms)

** stderr ** 
	error: no server found for cluster "ha-760000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-760000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-760000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.386125ms)

** stderr ** 
	error: no server found for cluster "ha-760000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-760000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-760000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.665375ms)

** stderr ** 
	error: no server found for cluster "ha-760000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-760000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-760000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.03925ms)

** stderr ** 
	error: no server found for cluster "ha-760000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000: exit status 7 (29.749292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-760000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (117.94s)
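
Note: the dozen ha_test.go:140 invocations above are one polling loop — the harness re-runs the Pod-IP query with sleeps between attempts until its deadline, which is why a cluster that never started still burns ~118s here. A hedged equivalent of that loop (binary path and profile name taken from the log; the attempt count and interval are illustrative, not the harness's actual schedule):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        for attempt := 1; attempt <= 12; attempt++ {
            out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", "ha-760000",
                "--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").CombinedOutput()
            if err == nil {
                fmt.Printf("pod IPs: %s\n", out)
                return
            }
            fmt.Printf("attempt %d failed: %v: %s\n", attempt, err, out)
            time.Sleep(10 * time.Second) // illustrative backoff between retries
        }
    }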

TestMultiControlPlane/serial/PingHostFromPods (0.08s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-760000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-760000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.365333ms)

** stderr ** 
	error: no server found for cluster "ha-760000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000: exit status 7 (28.878333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-760000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.08s)

TestMultiControlPlane/serial/AddWorkerNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-760000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-760000 -v=7 --alsologtostderr: exit status 83 (43.732834ms)

-- stdout --
	* The control-plane node ha-760000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-760000"

-- /stdout --
** stderr ** 
	I0812 03:25:44.794233    7619 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:25:44.794801    7619 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:25:44.794805    7619 out.go:304] Setting ErrFile to fd 2...
	I0812 03:25:44.794807    7619 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:25:44.794962    7619 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:25:44.795211    7619 mustload.go:65] Loading cluster: ha-760000
	I0812 03:25:44.795407    7619 config.go:182] Loaded profile config "ha-760000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:25:44.800155    7619 out.go:177] * The control-plane node ha-760000 host is not running: state=Stopped
	I0812 03:25:44.804237    7619 out.go:177]   To start a cluster, run: "minikube start -p ha-760000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-760000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000: exit status 7 (28.633833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-760000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.07s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-760000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-760000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.657459ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-760000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-760000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-760000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000: exit status 7 (29.13425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-760000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.07s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-760000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-760000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-760000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-760000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-760000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-760000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-760000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-760000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000: exit status 7 (29.508375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-760000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.07s)
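
Note: both assertions (ha_test.go:304 and :307) parse the same `profile list --output json` payload quoted above — the node count comes from the length of Config.Nodes and the "HAppy"/"Stopped" verdict from the top-level Status field. A minimal struct that decodes just those fields (field names copied from the JSON in this log; error handling is a sketch):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type profileList struct {
        Valid []struct {
            Name   string `json:"Name"`
            Status string `json:"Status"`
            Config struct {
                Nodes []struct {
                    ControlPlane bool `json:"ControlPlane"`
                    Worker       bool `json:"Worker"`
                } `json:"Nodes"`
            } `json:"Config"`
        } `json:"valid"`
    }

    func main() {
        out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
        if err != nil {
            fmt.Println("profile list failed:", err)
            return
        }
        var pl profileList
        if err := json.Unmarshal(out, &pl); err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        for _, p := range pl.Valid {
            // The test wants 4 nodes and status "HAppy"; this run has 1 node, "Stopped".
            fmt.Printf("%s: status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
        }
    }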

TestMultiControlPlane/serial/CopyFile (0.06s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-760000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-760000 status --output json -v=7 --alsologtostderr: exit status 7 (29.382625ms)

-- stdout --
	{"Name":"ha-760000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0812 03:25:44.997467    7631 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:25:44.997627    7631 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:25:44.997630    7631 out.go:304] Setting ErrFile to fd 2...
	I0812 03:25:44.997632    7631 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:25:44.997765    7631 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:25:44.997875    7631 out.go:298] Setting JSON to true
	I0812 03:25:44.997885    7631 mustload.go:65] Loading cluster: ha-760000
	I0812 03:25:44.997941    7631 notify.go:220] Checking for updates...
	I0812 03:25:44.998105    7631 config.go:182] Loaded profile config "ha-760000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:25:44.998119    7631 status.go:255] checking status of ha-760000 ...
	I0812 03:25:44.998327    7631 status.go:330] ha-760000 host status = "Stopped" (err=<nil>)
	I0812 03:25:44.998331    7631 status.go:343] host is not running, skipping remaining checks
	I0812 03:25:44.998333    7631 status.go:257] ha-760000 status: &{Name:ha-760000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-760000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000: exit status 7 (29.210584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-760000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
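
Note: the decode error at ha_test.go:333 ("json: cannot unmarshal object into Go value of type []cmd.Status") is a shape mismatch — with a single node, `status --output json` printed one object, while the test decodes into a slice. A tolerant decode sketch; Status here is a hypothetical local mirror of the fields printed above, not minikube's cmd.Status, and the raw payload is copied from the stdout in this log:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type Status struct {
        Name, Host, Kubelet, APIServer, Kubeconfig string
        Worker                                     bool
    }

    func main() {
        raw := []byte(`{"Name":"ha-760000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
        var list []Status
        if err := json.Unmarshal(raw, &list); err != nil {
            // Fall back to the single-object shape this run actually produced.
            var one Status
            if err2 := json.Unmarshal(raw, &one); err2 != nil {
                fmt.Println("decode failed:", err, err2)
                return
            }
            list = append(list, one)
        }
        fmt.Printf("%d status record(s): %+v\n", len(list), list)
    }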

TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-760000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-760000 node stop m02 -v=7 --alsologtostderr: exit status 85 (47.805625ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0812 03:25:45.057188    7635 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:25:45.057782    7635 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:25:45.057785    7635 out.go:304] Setting ErrFile to fd 2...
	I0812 03:25:45.057788    7635 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:25:45.057960    7635 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:25:45.058230    7635 mustload.go:65] Loading cluster: ha-760000
	I0812 03:25:45.058434    7635 config.go:182] Loaded profile config "ha-760000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:25:45.062987    7635 out.go:177] 
	W0812 03:25:45.065992    7635 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0812 03:25:45.065996    7635 out.go:239] * 
	* 
	W0812 03:25:45.067952    7635 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:25:45.071959    7635 out.go:177] 

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-760000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-760000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-760000 status -v=7 --alsologtostderr: exit status 7 (29.094875ms)

-- stdout --
	ha-760000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0812 03:25:45.104390    7637 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:25:45.104536    7637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:25:45.104539    7637 out.go:304] Setting ErrFile to fd 2...
	I0812 03:25:45.104542    7637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:25:45.104659    7637 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:25:45.104772    7637 out.go:298] Setting JSON to false
	I0812 03:25:45.104782    7637 mustload.go:65] Loading cluster: ha-760000
	I0812 03:25:45.104826    7637 notify.go:220] Checking for updates...
	I0812 03:25:45.104998    7637 config.go:182] Loaded profile config "ha-760000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:25:45.105005    7637 status.go:255] checking status of ha-760000 ...
	I0812 03:25:45.105210    7637 status.go:330] ha-760000 host status = "Stopped" (err=<nil>)
	I0812 03:25:45.105214    7637 status.go:343] host is not running, skipping remaining checks
	I0812 03:25:45.105216    7637 status.go:257] ha-760000 status: &{Name:ha-760000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-760000 status -v=7 --alsologtostderr": ha-760000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-760000 status -v=7 --alsologtostderr": ha-760000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-760000 status -v=7 --alsologtostderr": ha-760000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-760000 status -v=7 --alsologtostderr": ha-760000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000: exit status 7 (29.147375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-760000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)
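
Note: the GUEST_NODE_RETRIEVE failure above is a direct consequence of the earlier StartCluster failure: the profile only ever registered a single control-plane node, so any operation on "m02" exits with status 85 before touching a VM. The follow-up checks (ha_test.go:375-384) then fail because they count node entries in the plain-text status output. A minimal sketch of that kind of counting, simplified here and not the test's actual helper, using the status text captured above:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Status text exactly as captured in the log: one stopped control plane.
	status := "ha-760000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
	nodes := strings.Count(status, "type: Control Plane") + strings.Count(status, "type: Worker")
	running := strings.Count(status, "kubelet: Running")
	// Prints nodes=1 running=0; the test expects three nodes, all with running kubelets.
	fmt.Printf("nodes=%d running=%d\n", nodes, running)
}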

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-760000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-760000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-760000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-760000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000: exit status 7 (28.793208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-760000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)
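
Note: unlike the previous subtest, this check inspects the JSON from `minikube profile list --output json`. "Degraded" would mean some but not all control-plane nodes are healthy; with the whole profile down, the payload above reports "Stopped". A minimal sketch of the status lookup, assuming the simplified struct below (the real payload, as the log shows, also carries the entire cluster config):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors only the fields this check needs ("valid", "Name",
// "Status", as seen in the JSON above); everything else is ignored.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("unexpected JSON:", err)
		return
	}
	for _, p := range pl.Valid {
		// The run above prints "ha-760000: Stopped" where the test wants "Degraded".
		fmt.Printf("%s: %s\n", p.Name, p.Status)
	}
}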

TestMultiControlPlane/serial/RestartSecondaryNode (59.6s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-760000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-760000 node start m02 -v=7 --alsologtostderr: exit status 85 (47.786875ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0812 03:25:45.238591    7646 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:25:45.239108    7646 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:25:45.239112    7646 out.go:304] Setting ErrFile to fd 2...
	I0812 03:25:45.239119    7646 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:25:45.239265    7646 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:25:45.239488    7646 mustload.go:65] Loading cluster: ha-760000
	I0812 03:25:45.239676    7646 config.go:182] Loaded profile config "ha-760000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:25:45.243931    7646 out.go:177] 
	W0812 03:25:45.248005    7646 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0812 03:25:45.248011    7646 out.go:239] * 
	* 
	W0812 03:25:45.249944    7646 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:25:45.253862    7646 out.go:177] 

** /stderr **
ha_test.go:422: I0812 03:25:45.238591    7646 out.go:291] Setting OutFile to fd 1 ...
I0812 03:25:45.239108    7646 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 03:25:45.239112    7646 out.go:304] Setting ErrFile to fd 2...
I0812 03:25:45.239119    7646 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 03:25:45.239265    7646 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
I0812 03:25:45.239488    7646 mustload.go:65] Loading cluster: ha-760000
I0812 03:25:45.239676    7646 config.go:182] Loaded profile config "ha-760000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0812 03:25:45.243931    7646 out.go:177] 
W0812 03:25:45.248005    7646 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0812 03:25:45.248011    7646 out.go:239] * 
* 
W0812 03:25:45.249944    7646 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0812 03:25:45.253862    7646 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-760000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-760000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-760000 status -v=7 --alsologtostderr: exit status 7 (29.186209ms)

-- stdout --
	ha-760000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0812 03:25:45.286525    7648 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:25:45.286682    7648 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:25:45.286685    7648 out.go:304] Setting ErrFile to fd 2...
	I0812 03:25:45.286687    7648 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:25:45.286816    7648 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:25:45.286929    7648 out.go:298] Setting JSON to false
	I0812 03:25:45.286942    7648 mustload.go:65] Loading cluster: ha-760000
	I0812 03:25:45.286997    7648 notify.go:220] Checking for updates...
	I0812 03:25:45.287123    7648 config.go:182] Loaded profile config "ha-760000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:25:45.287130    7648 status.go:255] checking status of ha-760000 ...
	I0812 03:25:45.287351    7648 status.go:330] ha-760000 host status = "Stopped" (err=<nil>)
	I0812 03:25:45.287355    7648 status.go:343] host is not running, skipping remaining checks
	I0812 03:25:45.287357    7648 status.go:257] ha-760000 status: &{Name:ha-760000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-760000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-760000 status -v=7 --alsologtostderr: exit status 7 (72.312ms)

-- stdout --
	ha-760000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0812 03:25:46.540414    7650 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:25:46.540598    7650 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:25:46.540602    7650 out.go:304] Setting ErrFile to fd 2...
	I0812 03:25:46.540606    7650 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:25:46.540790    7650 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:25:46.540946    7650 out.go:298] Setting JSON to false
	I0812 03:25:46.540959    7650 mustload.go:65] Loading cluster: ha-760000
	I0812 03:25:46.541004    7650 notify.go:220] Checking for updates...
	I0812 03:25:46.541242    7650 config.go:182] Loaded profile config "ha-760000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:25:46.541251    7650 status.go:255] checking status of ha-760000 ...
	I0812 03:25:46.541526    7650 status.go:330] ha-760000 host status = "Stopped" (err=<nil>)
	I0812 03:25:46.541531    7650 status.go:343] host is not running, skipping remaining checks
	I0812 03:25:46.541534    7650 status.go:257] ha-760000 status: &{Name:ha-760000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-760000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-760000 status -v=7 --alsologtostderr: exit status 7 (73.624291ms)

-- stdout --
	ha-760000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0812 03:25:48.856428    7652 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:25:48.856636    7652 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:25:48.856640    7652 out.go:304] Setting ErrFile to fd 2...
	I0812 03:25:48.856644    7652 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:25:48.856810    7652 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:25:48.856977    7652 out.go:298] Setting JSON to false
	I0812 03:25:48.856994    7652 mustload.go:65] Loading cluster: ha-760000
	I0812 03:25:48.857028    7652 notify.go:220] Checking for updates...
	I0812 03:25:48.857278    7652 config.go:182] Loaded profile config "ha-760000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:25:48.857292    7652 status.go:255] checking status of ha-760000 ...
	I0812 03:25:48.857612    7652 status.go:330] ha-760000 host status = "Stopped" (err=<nil>)
	I0812 03:25:48.857617    7652 status.go:343] host is not running, skipping remaining checks
	I0812 03:25:48.857620    7652 status.go:257] ha-760000 status: &{Name:ha-760000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-760000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-760000 status -v=7 --alsologtostderr: exit status 7 (75.68075ms)

-- stdout --
	ha-760000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0812 03:25:51.770907    7654 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:25:51.771093    7654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:25:51.771098    7654 out.go:304] Setting ErrFile to fd 2...
	I0812 03:25:51.771101    7654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:25:51.771301    7654 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:25:51.771472    7654 out.go:298] Setting JSON to false
	I0812 03:25:51.771486    7654 mustload.go:65] Loading cluster: ha-760000
	I0812 03:25:51.771519    7654 notify.go:220] Checking for updates...
	I0812 03:25:51.771747    7654 config.go:182] Loaded profile config "ha-760000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:25:51.771756    7654 status.go:255] checking status of ha-760000 ...
	I0812 03:25:51.772066    7654 status.go:330] ha-760000 host status = "Stopped" (err=<nil>)
	I0812 03:25:51.772071    7654 status.go:343] host is not running, skipping remaining checks
	I0812 03:25:51.772074    7654 status.go:257] ha-760000 status: &{Name:ha-760000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-760000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-760000 status -v=7 --alsologtostderr: exit status 7 (76.5435ms)

-- stdout --
	ha-760000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0812 03:25:56.018949    7656 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:25:56.019188    7656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:25:56.019193    7656 out.go:304] Setting ErrFile to fd 2...
	I0812 03:25:56.019196    7656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:25:56.019367    7656 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:25:56.019527    7656 out.go:298] Setting JSON to false
	I0812 03:25:56.019542    7656 mustload.go:65] Loading cluster: ha-760000
	I0812 03:25:56.019582    7656 notify.go:220] Checking for updates...
	I0812 03:25:56.019812    7656 config.go:182] Loaded profile config "ha-760000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:25:56.019826    7656 status.go:255] checking status of ha-760000 ...
	I0812 03:25:56.020097    7656 status.go:330] ha-760000 host status = "Stopped" (err=<nil>)
	I0812 03:25:56.020102    7656 status.go:343] host is not running, skipping remaining checks
	I0812 03:25:56.020105    7656 status.go:257] ha-760000 status: &{Name:ha-760000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-760000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-760000 status -v=7 --alsologtostderr: exit status 7 (72.785ms)

-- stdout --
	ha-760000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0812 03:25:59.607295    7658 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:25:59.607509    7658 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:25:59.607514    7658 out.go:304] Setting ErrFile to fd 2...
	I0812 03:25:59.607517    7658 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:25:59.607696    7658 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:25:59.607862    7658 out.go:298] Setting JSON to false
	I0812 03:25:59.607873    7658 mustload.go:65] Loading cluster: ha-760000
	I0812 03:25:59.607917    7658 notify.go:220] Checking for updates...
	I0812 03:25:59.608147    7658 config.go:182] Loaded profile config "ha-760000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:25:59.608155    7658 status.go:255] checking status of ha-760000 ...
	I0812 03:25:59.608447    7658 status.go:330] ha-760000 host status = "Stopped" (err=<nil>)
	I0812 03:25:59.608452    7658 status.go:343] host is not running, skipping remaining checks
	I0812 03:25:59.608455    7658 status.go:257] ha-760000 status: &{Name:ha-760000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-760000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-760000 status -v=7 --alsologtostderr: exit status 7 (73.640083ms)

-- stdout --
	ha-760000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0812 03:26:07.139552    7663 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:26:07.139774    7663 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:26:07.139778    7663 out.go:304] Setting ErrFile to fd 2...
	I0812 03:26:07.139788    7663 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:26:07.139975    7663 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:26:07.140156    7663 out.go:298] Setting JSON to false
	I0812 03:26:07.140172    7663 mustload.go:65] Loading cluster: ha-760000
	I0812 03:26:07.140222    7663 notify.go:220] Checking for updates...
	I0812 03:26:07.140486    7663 config.go:182] Loaded profile config "ha-760000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:26:07.140497    7663 status.go:255] checking status of ha-760000 ...
	I0812 03:26:07.140791    7663 status.go:330] ha-760000 host status = "Stopped" (err=<nil>)
	I0812 03:26:07.140796    7663 status.go:343] host is not running, skipping remaining checks
	I0812 03:26:07.140800    7663 status.go:257] ha-760000 status: &{Name:ha-760000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-760000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-760000 status -v=7 --alsologtostderr: exit status 7 (71.585166ms)

-- stdout --
	ha-760000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0812 03:26:22.083050    7667 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:26:22.083209    7667 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:26:22.083214    7667 out.go:304] Setting ErrFile to fd 2...
	I0812 03:26:22.083221    7667 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:26:22.083384    7667 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:26:22.083553    7667 out.go:298] Setting JSON to false
	I0812 03:26:22.083571    7667 mustload.go:65] Loading cluster: ha-760000
	I0812 03:26:22.083596    7667 notify.go:220] Checking for updates...
	I0812 03:26:22.083821    7667 config.go:182] Loaded profile config "ha-760000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:26:22.083830    7667 status.go:255] checking status of ha-760000 ...
	I0812 03:26:22.084102    7667 status.go:330] ha-760000 host status = "Stopped" (err=<nil>)
	I0812 03:26:22.084107    7667 status.go:343] host is not running, skipping remaining checks
	I0812 03:26:22.084110    7667 status.go:257] ha-760000 status: &{Name:ha-760000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-760000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-760000 status -v=7 --alsologtostderr: exit status 7 (70.93525ms)

-- stdout --
	ha-760000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0812 03:26:44.773847    7679 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:26:44.774074    7679 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:26:44.774080    7679 out.go:304] Setting ErrFile to fd 2...
	I0812 03:26:44.774083    7679 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:26:44.774255    7679 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:26:44.774436    7679 out.go:298] Setting JSON to false
	I0812 03:26:44.774451    7679 mustload.go:65] Loading cluster: ha-760000
	I0812 03:26:44.774496    7679 notify.go:220] Checking for updates...
	I0812 03:26:44.774736    7679 config.go:182] Loaded profile config "ha-760000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:26:44.774749    7679 status.go:255] checking status of ha-760000 ...
	I0812 03:26:44.775062    7679 status.go:330] ha-760000 host status = "Stopped" (err=<nil>)
	I0812 03:26:44.775067    7679 status.go:343] host is not running, skipping remaining checks
	I0812 03:26:44.775070    7679 status.go:257] ha-760000 status: &{Name:ha-760000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-760000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000: exit status 7 (33.728833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-760000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (59.60s)
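
Note: the 59.6s wall time is almost entirely polling: after `node start` fails, the test re-runs `status` at growing intervals (see the timestamps above: 03:25:45, :46, :48, :51, :56, then 03:26:07, :22, :44) until it gives up, because the host never leaves "Stopped". A rough sketch of such a back-off poll, with hypothetical timings rather than the test's own retry helper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(time.Minute)
	// Double the wait after each failed poll, up to the deadline.
	for wait := time.Second; time.Now().Before(deadline); wait *= 2 {
		out, _ := exec.Command("out/minikube-darwin-arm64", "-p", "ha-760000",
			"status", "--format", "{{.Host}}").Output()
		if strings.TrimSpace(string(out)) == "Running" {
			fmt.Println("host is running")
			return
		}
		time.Sleep(wait)
	}
	// The run above takes this path: every poll returns "Stopped".
	fmt.Println("timed out waiting for the host")
}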

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.07s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-760000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-760000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-760000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-760000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-760000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-760000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-760000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-760000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000: exit status 7 (29.234958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-760000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.07s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.39s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-760000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-760000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-760000 -v=7 --alsologtostderr: (3.03613375s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-760000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-760000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.226879167s)

-- stdout --
	* [ha-760000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-760000" primary control-plane node in "ha-760000" cluster
	* Restarting existing qemu2 VM for "ha-760000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-760000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
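
Note: the `Failed to connect to "/var/run/socket_vmnet": Connection refused` lines above point at the common root cause across this report: the qemu2 driver launches every VM through socket_vmnet_client, and nothing is listening on /var/run/socket_vmnet on this agent, so each restart attempt dies before qemu comes up. A throwaway probe of the socket (not part of the suite) reproduces the symptom:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Dial the unix socket the qemu2 driver hands to socket_vmnet_client.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		// On this agent this prints a "connection refused" error,
		// matching the driver failure captured above.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
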
** stderr ** 
	I0812 03:26:48.015199    7710 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:26:48.015363    7710 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:26:48.015367    7710 out.go:304] Setting ErrFile to fd 2...
	I0812 03:26:48.015370    7710 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:26:48.015548    7710 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:26:48.016762    7710 out.go:298] Setting JSON to false
	I0812 03:26:48.036476    7710 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5178,"bootTime":1723453230,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:26:48.036542    7710 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:26:48.041630    7710 out.go:177] * [ha-760000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:26:48.049764    7710 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:26:48.049791    7710 notify.go:220] Checking for updates...
	I0812 03:26:48.056710    7710 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:26:48.059759    7710 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:26:48.062687    7710 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:26:48.065705    7710 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:26:48.068737    7710 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:26:48.072073    7710 config.go:182] Loaded profile config "ha-760000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:26:48.072133    7710 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:26:48.076662    7710 out.go:177] * Using the qemu2 driver based on existing profile
	I0812 03:26:48.082765    7710 start.go:297] selected driver: qemu2
	I0812 03:26:48.082772    7710 start.go:901] validating driver "qemu2" against &{Name:ha-760000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-760000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:26:48.082835    7710 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:26:48.085102    7710 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 03:26:48.085141    7710 cni.go:84] Creating CNI manager for ""
	I0812 03:26:48.085145    7710 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0812 03:26:48.085205    7710 start.go:340] cluster config:
	{Name:ha-760000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-760000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:26:48.088715    7710 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:26:48.100053    7710 out.go:177] * Starting "ha-760000" primary control-plane node in "ha-760000" cluster
	I0812 03:26:48.103717    7710 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0812 03:26:48.103734    7710 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0812 03:26:48.103745    7710 cache.go:56] Caching tarball of preloaded images
	I0812 03:26:48.103811    7710 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0812 03:26:48.103818    7710 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0812 03:26:48.103885    7710 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/ha-760000/config.json ...
	I0812 03:26:48.104376    7710 start.go:360] acquireMachinesLock for ha-760000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:26:48.104416    7710 start.go:364] duration metric: took 33.583µs to acquireMachinesLock for "ha-760000"
	I0812 03:26:48.104427    7710 start.go:96] Skipping create...Using existing machine configuration
	I0812 03:26:48.104432    7710 fix.go:54] fixHost starting: 
	I0812 03:26:48.104559    7710 fix.go:112] recreateIfNeeded on ha-760000: state=Stopped err=<nil>
	W0812 03:26:48.104569    7710 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 03:26:48.112643    7710 out.go:177] * Restarting existing qemu2 VM for "ha-760000" ...
	I0812 03:26:48.116666    7710 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:26:48.116705    7710 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/ha-760000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/ha-760000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/ha-760000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:96:10:38:a5:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/ha-760000/disk.qcow2
	I0812 03:26:48.118957    7710 main.go:141] libmachine: STDOUT: 
	I0812 03:26:48.118993    7710 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:26:48.119022    7710 fix.go:56] duration metric: took 14.5895ms for fixHost
	I0812 03:26:48.119027    7710 start.go:83] releasing machines lock for "ha-760000", held for 14.605708ms
	W0812 03:26:48.119036    7710 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:26:48.119070    7710 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:26:48.119075    7710 start.go:729] Will try again in 5 seconds ...
	I0812 03:26:53.121174    7710 start.go:360] acquireMachinesLock for ha-760000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:26:53.121530    7710 start.go:364] duration metric: took 268.083µs to acquireMachinesLock for "ha-760000"
	I0812 03:26:53.121642    7710 start.go:96] Skipping create...Using existing machine configuration
	I0812 03:26:53.121688    7710 fix.go:54] fixHost starting: 
	I0812 03:26:53.122311    7710 fix.go:112] recreateIfNeeded on ha-760000: state=Stopped err=<nil>
	W0812 03:26:53.122334    7710 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 03:26:53.126820    7710 out.go:177] * Restarting existing qemu2 VM for "ha-760000" ...
	I0812 03:26:53.133756    7710 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:26:53.133937    7710 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/ha-760000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/ha-760000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/ha-760000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:96:10:38:a5:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/ha-760000/disk.qcow2
	I0812 03:26:53.142623    7710 main.go:141] libmachine: STDOUT: 
	I0812 03:26:53.142687    7710 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:26:53.142751    7710 fix.go:56] duration metric: took 21.091042ms for fixHost
	I0812 03:26:53.142764    7710 start.go:83] releasing machines lock for "ha-760000", held for 21.21375ms
	W0812 03:26:53.142936    7710 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-760000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-760000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:26:53.150830    7710 out.go:177] 
	W0812 03:26:53.154826    7710 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:26:53.154856    7710 out.go:239] * 
	* 
	W0812 03:26:53.157733    7710 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:26:53.163700    7710 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-760000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-760000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000: exit status 7 (33.23875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-760000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.39s)
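
Every restart attempt in this block dies the same way: the qemu2 driver cannot reach the socket_vmnet daemon behind "/var/run/socket_vmnet", so the VM never gets its network and start exits 80. A minimal health check for the daemon on the build agent might look like the following sketch; the Homebrew-managed service name is an assumption about how socket_vmnet was installed on these agents:

    # Is the daemon alive, and does its socket exist?
    pgrep -l socket_vmnet
    ls -l /var/run/socket_vmnet

    # Restart it (assumes the Homebrew service; the daemon runs as root)
    sudo brew services restart socket_vmnet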

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-760000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-760000 node delete m03 -v=7 --alsologtostderr: exit status 83 (37.964584ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-760000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-760000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 03:26:53.306553    7722 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:26:53.306984    7722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:26:53.306987    7722 out.go:304] Setting ErrFile to fd 2...
	I0812 03:26:53.306990    7722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:26:53.307155    7722 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:26:53.307380    7722 mustload.go:65] Loading cluster: ha-760000
	I0812 03:26:53.307575    7722 config.go:182] Loaded profile config "ha-760000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:26:53.311723    7722 out.go:177] * The control-plane node ha-760000 host is not running: state=Stopped
	I0812 03:26:53.312881    7722 out.go:177]   To start a cluster, run: "minikube start -p ha-760000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-760000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-760000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-760000 status -v=7 --alsologtostderr: exit status 7 (29.670834ms)

                                                
                                                
-- stdout --
	ha-760000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 03:26:53.344672    7724 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:26:53.344818    7724 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:26:53.344821    7724 out.go:304] Setting ErrFile to fd 2...
	I0812 03:26:53.344824    7724 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:26:53.344961    7724 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:26:53.345079    7724 out.go:298] Setting JSON to false
	I0812 03:26:53.345089    7724 mustload.go:65] Loading cluster: ha-760000
	I0812 03:26:53.345148    7724 notify.go:220] Checking for updates...
	I0812 03:26:53.345294    7724 config.go:182] Loaded profile config "ha-760000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:26:53.345301    7724 status.go:255] checking status of ha-760000 ...
	I0812 03:26:53.345493    7724 status.go:330] ha-760000 host status = "Stopped" (err=<nil>)
	I0812 03:26:53.345497    7724 status.go:343] host is not running, skipping remaining checks
	I0812 03:26:53.345499    7724 status.go:257] ha-760000 status: &{Name:ha-760000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-760000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000: exit status 7 (29.314042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-760000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-760000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-760000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-760000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-760000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000: exit status 7 (29.456916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-760000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
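
The "Degraded" assertion above is driven entirely by the Status field buried in the `profile list --output json` blob. When reading reports like this one, the relevant fields can be pulled out of that JSON with a one-liner; jq being on the PATH is an assumption:

    out/minikube-darwin-arm64 profile list --output json \
      | jq -r '.valid[] | "\(.Name)\t\(.Status)"'

Against the profile dumped above this prints "ha-760000	Stopped", which is exactly the mismatch the test reports.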

                                                
                                    
TestMultiControlPlane/serial/StopCluster (3.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-760000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-760000 stop -v=7 --alsologtostderr: (3.199085s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-760000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-760000 status -v=7 --alsologtostderr: exit status 7 (65.896417ms)

                                                
                                                
-- stdout --
	ha-760000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 03:26:56.715430    7753 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:26:56.715622    7753 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:26:56.715627    7753 out.go:304] Setting ErrFile to fd 2...
	I0812 03:26:56.715630    7753 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:26:56.715800    7753 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:26:56.715971    7753 out.go:298] Setting JSON to false
	I0812 03:26:56.715984    7753 mustload.go:65] Loading cluster: ha-760000
	I0812 03:26:56.716026    7753 notify.go:220] Checking for updates...
	I0812 03:26:56.716230    7753 config.go:182] Loaded profile config "ha-760000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:26:56.716243    7753 status.go:255] checking status of ha-760000 ...
	I0812 03:26:56.716501    7753 status.go:330] ha-760000 host status = "Stopped" (err=<nil>)
	I0812 03:26:56.716506    7753 status.go:343] host is not running, skipping remaining checks
	I0812 03:26:56.716509    7753 status.go:257] ha-760000 status: &{Name:ha-760000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-760000 status -v=7 --alsologtostderr": ha-760000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-760000 status -v=7 --alsologtostderr": ha-760000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-760000 status -v=7 --alsologtostderr": ha-760000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000: exit status 7 (31.337083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-760000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.30s)
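
`status` accepts a Go-template via --format, and the struct dump in the stderr above (&{Name:ha-760000 Host:Stopped Kubelet:Stopped APIServer:Stopped ...}) shows which fields the template can reference. A compact per-component view of the same data, sketched from those field names:

    out/minikube-darwin-arm64 status -p ha-760000 \
      --format '{{.Name}}: host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'

The --format={{.Host}} call used by the post-mortem helper is the single-field version of the same template.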

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-760000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-760000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.180627125s)

                                                
                                                
-- stdout --
	* [ha-760000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-760000" primary control-plane node in "ha-760000" cluster
	* Restarting existing qemu2 VM for "ha-760000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-760000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 03:26:56.776363    7757 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:26:56.776485    7757 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:26:56.776491    7757 out.go:304] Setting ErrFile to fd 2...
	I0812 03:26:56.776494    7757 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:26:56.776612    7757 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:26:56.777596    7757 out.go:298] Setting JSON to false
	I0812 03:26:56.793687    7757 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5186,"bootTime":1723453230,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:26:56.793759    7757 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:26:56.797661    7757 out.go:177] * [ha-760000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:26:56.804402    7757 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:26:56.804454    7757 notify.go:220] Checking for updates...
	I0812 03:26:56.811404    7757 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:26:56.814356    7757 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:26:56.817421    7757 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:26:56.820269    7757 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:26:56.823433    7757 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:26:56.826705    7757 config.go:182] Loaded profile config "ha-760000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:26:56.826977    7757 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:26:56.830267    7757 out.go:177] * Using the qemu2 driver based on existing profile
	I0812 03:26:56.837336    7757 start.go:297] selected driver: qemu2
	I0812 03:26:56.837344    7757 start.go:901] validating driver "qemu2" against &{Name:ha-760000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.3 ClusterName:ha-760000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:26:56.837426    7757 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:26:56.839754    7757 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 03:26:56.839776    7757 cni.go:84] Creating CNI manager for ""
	I0812 03:26:56.839781    7757 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0812 03:26:56.839829    7757 start.go:340] cluster config:
	{Name:ha-760000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-760000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:26:56.843361    7757 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:26:56.851376    7757 out.go:177] * Starting "ha-760000" primary control-plane node in "ha-760000" cluster
	I0812 03:26:56.855432    7757 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0812 03:26:56.855449    7757 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0812 03:26:56.855458    7757 cache.go:56] Caching tarball of preloaded images
	I0812 03:26:56.855511    7757 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0812 03:26:56.855517    7757 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0812 03:26:56.855590    7757 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/ha-760000/config.json ...
	I0812 03:26:56.856064    7757 start.go:360] acquireMachinesLock for ha-760000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:26:56.856092    7757 start.go:364] duration metric: took 22.375µs to acquireMachinesLock for "ha-760000"
	I0812 03:26:56.856105    7757 start.go:96] Skipping create...Using existing machine configuration
	I0812 03:26:56.856113    7757 fix.go:54] fixHost starting: 
	I0812 03:26:56.856235    7757 fix.go:112] recreateIfNeeded on ha-760000: state=Stopped err=<nil>
	W0812 03:26:56.856243    7757 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 03:26:56.864362    7757 out.go:177] * Restarting existing qemu2 VM for "ha-760000" ...
	I0812 03:26:56.868358    7757 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:26:56.868396    7757 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/ha-760000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/ha-760000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/ha-760000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:96:10:38:a5:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/ha-760000/disk.qcow2
	I0812 03:26:56.870592    7757 main.go:141] libmachine: STDOUT: 
	I0812 03:26:56.870614    7757 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:26:56.870642    7757 fix.go:56] duration metric: took 14.530584ms for fixHost
	I0812 03:26:56.870647    7757 start.go:83] releasing machines lock for "ha-760000", held for 14.551333ms
	W0812 03:26:56.870653    7757 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:26:56.870694    7757 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:26:56.870699    7757 start.go:729] Will try again in 5 seconds ...
	I0812 03:27:01.872855    7757 start.go:360] acquireMachinesLock for ha-760000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:27:01.873343    7757 start.go:364] duration metric: took 333.958µs to acquireMachinesLock for "ha-760000"
	I0812 03:27:01.873483    7757 start.go:96] Skipping create...Using existing machine configuration
	I0812 03:27:01.873505    7757 fix.go:54] fixHost starting: 
	I0812 03:27:01.874237    7757 fix.go:112] recreateIfNeeded on ha-760000: state=Stopped err=<nil>
	W0812 03:27:01.874264    7757 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 03:27:01.881743    7757 out.go:177] * Restarting existing qemu2 VM for "ha-760000" ...
	I0812 03:27:01.885674    7757 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:27:01.885937    7757 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/ha-760000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/ha-760000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/ha-760000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:96:10:38:a5:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/ha-760000/disk.qcow2
	I0812 03:27:01.895084    7757 main.go:141] libmachine: STDOUT: 
	I0812 03:27:01.895153    7757 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:27:01.895233    7757 fix.go:56] duration metric: took 21.733584ms for fixHost
	I0812 03:27:01.895253    7757 start.go:83] releasing machines lock for "ha-760000", held for 21.888542ms
	W0812 03:27:01.895425    7757 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-760000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-760000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:27:01.902737    7757 out.go:177] 
	W0812 03:27:01.905640    7757 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:27:01.905675    7757 out.go:239] * 
	* 
	W0812 03:27:01.908103    7757 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:27:01.917708    7757 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-760000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000: exit status 7 (68.788ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-760000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
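
The driver's own advice here is to throw the profile away and recreate it. Taken from the messages above, that sequence is:

    out/minikube-darwin-arm64 delete -p ha-760000
    out/minikube-darwin-arm64 start -p ha-760000 --wait=true --driver=qemu2

Note that this only helps once the socket_vmnet daemon is reachable again; with the daemon down, a freshly created VM fails on the same connect, as the TestImageBuild block below demonstrates.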

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-760000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-760000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-760000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-760000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000: exit status 7 (29.660625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-760000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-760000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-760000 --control-plane -v=7 --alsologtostderr: exit status 83 (41.77525ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-760000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-760000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 03:27:02.107699    7774 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:27:02.107857    7774 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:27:02.107861    7774 out.go:304] Setting ErrFile to fd 2...
	I0812 03:27:02.107863    7774 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:27:02.107986    7774 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:27:02.108213    7774 mustload.go:65] Loading cluster: ha-760000
	I0812 03:27:02.108397    7774 config.go:182] Loaded profile config "ha-760000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:27:02.112979    7774 out.go:177] * The control-plane node ha-760000 host is not running: state=Stopped
	I0812 03:27:02.116977    7774 out.go:177]   To start a cluster, run: "minikube start -p ha-760000"

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-760000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000: exit status 7 (29.288708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-760000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-760000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-760000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-760000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-760000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-760000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-760000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-760000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-760000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-760000 -n ha-760000: exit status 7 (29.361459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-760000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

                                                
                                    
TestImageBuild/serial/Setup (9.84s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-820000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-820000 --driver=qemu2 : exit status 80 (9.766814625s)

                                                
                                                
-- stdout --
	* [image-820000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-820000" primary control-plane node in "image-820000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-820000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-820000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-820000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-820000 -n image-820000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-820000 -n image-820000: exit status 7 (68.34075ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-820000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.84s)
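
This failure can be reproduced without minikube at all: as the "libmachine: executing:" lines earlier in this report show, the driver simply execs socket_vmnet_client with the socket path followed by the qemu command line. Substituting a trivial command for qemu isolates the broken piece; using `true` as the stand-in is an illustrative choice:

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

With the daemon down this prints the same 'Failed to connect to "/var/run/socket_vmnet": Connection refused' and exits non-zero.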

                                                
                                    
TestJSONOutput/start/Command (9.79s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-471000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-471000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.793390875s)

-- stdout --
	{"specversion":"1.0","id":"9f81155e-cf47-432c-9733-fff239400bcc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-471000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3f58b092-a50b-4fb3-a73d-70194568ca90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19409"}}
	{"specversion":"1.0","id":"79739465-e98d-4f6c-8d13-d3838b97b634","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig"}}
	{"specversion":"1.0","id":"309afd05-5744-4ce1-be53-053f5a2dd3c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"3fff3ef8-8db7-4ee2-bc82-0a522b619318","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7d477a98-aa05-445b-bf92-83269465c2e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube"}}
	{"specversion":"1.0","id":"74d4b857-bd60-470a-bd45-797aeb9aed1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e33e4ee7-e507-4b08-b1f9-04fc31be7aba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"786a0422-5da3-4a7f-8638-6e5cff117082","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"ea75758a-6f0f-47ec-a363-ccd7ce7d67e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-471000\" primary control-plane node in \"json-output-471000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"477182b6-4e10-4755-af14-c3d98705069a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"17ed5a13-3d76-4a85-959f-254648c62bbe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-471000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"101f51d1-8f0c-4d58-9af3-c242c91e1c3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"b614384f-22fe-4bf6-a28a-97a5f2feea33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"57d13ee5-c8d7-4955-b0d7-6d1b710e731b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-471000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"727e0b9d-be87-4c60-b95d-b3f5e21167e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"973c9aa0-3f0d-46b6-a35f-17d90b76f83d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-471000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
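Note on the conversion error above: the JSON-output test decodes each stdout line as a CloudEvents JSON object, so the plain-text "OUTPUT: " and "ERROR: ..." banners that socket_vmnet_client leaks into stdout are exactly what trips the parser ('O' is not the start of a JSON value). A minimal, self-contained sketch of that per-line decoding, with hypothetical example lines, not the test's actual code:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// One well-formed CloudEvent line followed by the kind of raw driver
	// banner that leaked into stdout in the run above (both abbreviated).
	lines := []string{
		`{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"currentstep":"9"}}`,
		`OUTPUT: `,
	}
	for _, l := range lines {
		var ev map[string]interface{}
		if err := json.Unmarshal([]byte(l), &ev); err != nil {
			// Prints: converting to cloud events: invalid character 'O'
			// looking for beginning of value
			fmt.Println("converting to cloud events:", err)
			return
		}
		fmt.Println("parsed event of type", ev["type"])
	}
}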
--- FAIL: TestJSONOutput/start/Command (9.79s)

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-471000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-471000 --output=json --user=testUser: exit status 83 (78.631458ms)

-- stdout --
	{"specversion":"1.0","id":"52eb95be-befc-48a2-a2ee-d146393d1799","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-471000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"4d7ffd30-b90c-49ae-8e14-2e62a9bfc7ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-471000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-471000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.04s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-471000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-471000 --output=json --user=testUser: exit status 83 (44.194583ms)

-- stdout --
	* The control-plane node json-output-471000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-471000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-471000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-471000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

TestMinikubeProfile (10.35s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-120000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-120000 --driver=qemu2 : exit status 80 (10.051642792s)

-- stdout --
	* [first-120000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-120000" primary control-plane node in "first-120000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-120000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-120000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-120000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-12 03:27:35.92537 -0700 PDT m=+500.114296001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-122000 -n second-122000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-122000 -n second-122000: exit status 85 (79.740375ms)

-- stdout --
	* Profile "second-122000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-122000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-122000" host is not running, skipping log retrieval (state="* Profile \"second-122000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-122000\"")
helpers_test.go:175: Cleaning up "second-122000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-122000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-12 03:27:36.118949 -0700 PDT m=+500.307878210
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-120000 -n first-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-120000 -n first-120000: exit status 7 (29.411334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-120000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-120000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-120000
--- FAIL: TestMinikubeProfile (10.35s)

TestMountStart/serial/StartWithMountFirst (10.01s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-216000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-216000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.937273459s)

-- stdout --
	* [mount-start-1-216000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-216000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-216000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-216000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-216000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-216000 -n mount-start-1-216000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-216000 -n mount-start-1-216000: exit status 7 (67.638667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-216000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.01s)

TestMultiNode/serial/FreshStart2Nodes (9.93s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-552000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-552000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.859766667s)

-- stdout --
	* [multinode-552000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-552000" primary control-plane node in "multinode-552000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-552000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0812 03:27:46.439940    7934 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:27:46.440064    7934 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:27:46.440068    7934 out.go:304] Setting ErrFile to fd 2...
	I0812 03:27:46.440074    7934 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:27:46.440193    7934 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:27:46.441230    7934 out.go:298] Setting JSON to false
	I0812 03:27:46.457235    7934 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5236,"bootTime":1723453230,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:27:46.457311    7934 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:27:46.463786    7934 out.go:177] * [multinode-552000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:27:46.471790    7934 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:27:46.471841    7934 notify.go:220] Checking for updates...
	I0812 03:27:46.478776    7934 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:27:46.481711    7934 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:27:46.484753    7934 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:27:46.487777    7934 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:27:46.490710    7934 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:27:46.493846    7934 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:27:46.497767    7934 out.go:177] * Using the qemu2 driver based on user configuration
	I0812 03:27:46.504762    7934 start.go:297] selected driver: qemu2
	I0812 03:27:46.504768    7934 start.go:901] validating driver "qemu2" against <nil>
	I0812 03:27:46.504774    7934 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:27:46.507144    7934 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 03:27:46.509780    7934 out.go:177] * Automatically selected the socket_vmnet network
	I0812 03:27:46.512811    7934 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 03:27:46.512848    7934 cni.go:84] Creating CNI manager for ""
	I0812 03:27:46.512854    7934 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0812 03:27:46.512858    7934 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0812 03:27:46.512898    7934 start.go:340] cluster config:
	{Name:multinode-552000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-552000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:27:46.516728    7934 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:27:46.524790    7934 out.go:177] * Starting "multinode-552000" primary control-plane node in "multinode-552000" cluster
	I0812 03:27:46.528708    7934 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0812 03:27:46.528728    7934 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0812 03:27:46.528741    7934 cache.go:56] Caching tarball of preloaded images
	I0812 03:27:46.528831    7934 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0812 03:27:46.528837    7934 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0812 03:27:46.529079    7934 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/multinode-552000/config.json ...
	I0812 03:27:46.529091    7934 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/multinode-552000/config.json: {Name:mkd410ef0743621078d491de58117a17c80bc784 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:27:46.529310    7934 start.go:360] acquireMachinesLock for multinode-552000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:27:46.529347    7934 start.go:364] duration metric: took 30.334µs to acquireMachinesLock for "multinode-552000"
	I0812 03:27:46.529362    7934 start.go:93] Provisioning new machine with config: &{Name:multinode-552000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-552000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:27:46.529392    7934 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:27:46.537759    7934 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0812 03:27:46.556342    7934 start.go:159] libmachine.API.Create for "multinode-552000" (driver="qemu2")
	I0812 03:27:46.556370    7934 client.go:168] LocalClient.Create starting
	I0812 03:27:46.556435    7934 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:27:46.556464    7934 main.go:141] libmachine: Decoding PEM data...
	I0812 03:27:46.556474    7934 main.go:141] libmachine: Parsing certificate...
	I0812 03:27:46.556523    7934 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:27:46.556547    7934 main.go:141] libmachine: Decoding PEM data...
	I0812 03:27:46.556556    7934 main.go:141] libmachine: Parsing certificate...
	I0812 03:27:46.556965    7934 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:27:46.714544    7934 main.go:141] libmachine: Creating SSH key...
	I0812 03:27:46.758635    7934 main.go:141] libmachine: Creating Disk image...
	I0812 03:27:46.758640    7934 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:27:46.758857    7934 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/multinode-552000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/multinode-552000/disk.qcow2
	I0812 03:27:46.768054    7934 main.go:141] libmachine: STDOUT: 
	I0812 03:27:46.768068    7934 main.go:141] libmachine: STDERR: 
	I0812 03:27:46.768115    7934 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/multinode-552000/disk.qcow2 +20000M
	I0812 03:27:46.775896    7934 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:27:46.775915    7934 main.go:141] libmachine: STDERR: 
	I0812 03:27:46.775926    7934 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/multinode-552000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/multinode-552000/disk.qcow2
	I0812 03:27:46.775930    7934 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:27:46.775941    7934 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:27:46.775974    7934 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/multinode-552000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/multinode-552000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/multinode-552000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:de:81:53:95:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/multinode-552000/disk.qcow2
	I0812 03:27:46.777534    7934 main.go:141] libmachine: STDOUT: 
	I0812 03:27:46.777549    7934 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:27:46.777568    7934 client.go:171] duration metric: took 221.196458ms to LocalClient.Create
	I0812 03:27:48.779737    7934 start.go:128] duration metric: took 2.250355875s to createHost
	I0812 03:27:48.779834    7934 start.go:83] releasing machines lock for "multinode-552000", held for 2.250513625s
	W0812 03:27:48.779901    7934 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:27:48.791284    7934 out.go:177] * Deleting "multinode-552000" in qemu2 ...
	W0812 03:27:48.822841    7934 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:27:48.822877    7934 start.go:729] Will try again in 5 seconds ...
	I0812 03:27:53.825042    7934 start.go:360] acquireMachinesLock for multinode-552000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:27:53.825584    7934 start.go:364] duration metric: took 422.5µs to acquireMachinesLock for "multinode-552000"
	I0812 03:27:53.825716    7934 start.go:93] Provisioning new machine with config: &{Name:multinode-552000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-552000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:27:53.826038    7934 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:27:53.837554    7934 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0812 03:27:53.889432    7934 start.go:159] libmachine.API.Create for "multinode-552000" (driver="qemu2")
	I0812 03:27:53.889482    7934 client.go:168] LocalClient.Create starting
	I0812 03:27:53.889605    7934 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:27:53.889675    7934 main.go:141] libmachine: Decoding PEM data...
	I0812 03:27:53.889694    7934 main.go:141] libmachine: Parsing certificate...
	I0812 03:27:53.889763    7934 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:27:53.889810    7934 main.go:141] libmachine: Decoding PEM data...
	I0812 03:27:53.889825    7934 main.go:141] libmachine: Parsing certificate...
	I0812 03:27:53.890355    7934 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:27:54.054534    7934 main.go:141] libmachine: Creating SSH key...
	I0812 03:27:54.205298    7934 main.go:141] libmachine: Creating Disk image...
	I0812 03:27:54.205308    7934 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:27:54.205522    7934 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/multinode-552000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/multinode-552000/disk.qcow2
	I0812 03:27:54.215092    7934 main.go:141] libmachine: STDOUT: 
	I0812 03:27:54.215155    7934 main.go:141] libmachine: STDERR: 
	I0812 03:27:54.215200    7934 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/multinode-552000/disk.qcow2 +20000M
	I0812 03:27:54.222960    7934 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:27:54.222974    7934 main.go:141] libmachine: STDERR: 
	I0812 03:27:54.222983    7934 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/multinode-552000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/multinode-552000/disk.qcow2
	I0812 03:27:54.222986    7934 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:27:54.222995    7934 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:27:54.223021    7934 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/multinode-552000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/multinode-552000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/multinode-552000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:97:c5:f9:f7:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/multinode-552000/disk.qcow2
	I0812 03:27:54.224546    7934 main.go:141] libmachine: STDOUT: 
	I0812 03:27:54.224562    7934 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:27:54.224575    7934 client.go:171] duration metric: took 335.091291ms to LocalClient.Create
	I0812 03:27:56.226791    7934 start.go:128] duration metric: took 2.400683708s to createHost
	I0812 03:27:56.226886    7934 start.go:83] releasing machines lock for "multinode-552000", held for 2.401290459s
	W0812 03:27:56.227216    7934 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-552000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-552000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:27:56.235859    7934 out.go:177] 
	W0812 03:27:56.241937    7934 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:27:56.241960    7934 out.go:239] * 
	* 
	W0812 03:27:56.244754    7934 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:27:56.253730    7934 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-552000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000: exit status 7 (70.350917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-552000" host is not running, skipping log retrieval (state="Stopped")
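Note on the root cause running through this report: the verbose stderr above shows socket_vmnet_client being handed the full qemu-system-aarch64 command line, but it exits before QEMU ever runs because it cannot connect to the unix socket at /var/run/socket_vmnet. A minimal diagnostic sketch (not minikube code) that checks the same precondition by dialing the socket directly:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the socket that socket_vmnet_client passes to QEMU as fd 3.
	// "connection refused" here means the socket file exists but no
	// socket_vmnet daemon is accepting connections on it -- the condition
	// behind every GUEST_PROVISION failure in this report.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}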
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.93s)

TestMultiNode/serial/DeployApp2Nodes (115.66s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (59.206041ms)

** stderr ** 
	error: cluster "multinode-552000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- rollout status deployment/busybox: exit status 1 (55.535541ms)

** stderr ** 
	error: no server found for cluster "multinode-552000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (55.343125ms)

** stderr ** 
	error: no server found for cluster "multinode-552000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.749292ms)

** stderr ** 
	error: no server found for cluster "multinode-552000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.89575ms)

** stderr ** 
	error: no server found for cluster "multinode-552000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.167834ms)

** stderr ** 
	error: no server found for cluster "multinode-552000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.050166ms)

** stderr ** 
	error: no server found for cluster "multinode-552000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.476333ms)

** stderr ** 
	error: no server found for cluster "multinode-552000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.381542ms)

** stderr ** 
	error: no server found for cluster "multinode-552000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.451ms)

** stderr ** 
	error: no server found for cluster "multinode-552000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.443917ms)

** stderr ** 
	error: no server found for cluster "multinode-552000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.614459ms)

** stderr ** 
	error: no server found for cluster "multinode-552000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.670458ms)

** stderr ** 
	error: no server found for cluster "multinode-552000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.073542ms)

** stderr ** 
	error: no server found for cluster "multinode-552000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- exec  -- nslookup kubernetes.io: exit status 1 (55.358917ms)

** stderr ** 
	error: no server found for cluster "multinode-552000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- exec  -- nslookup kubernetes.default: exit status 1 (55.371042ms)

** stderr ** 
	error: no server found for cluster "multinode-552000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (55.462709ms)

** stderr ** 
	error: no server found for cluster "multinode-552000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000: exit status 7 (28.858166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-552000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (115.66s)
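
Every kubectl call in the block above dies with `error: no server found for cluster "multinode-552000"` because the VM never came up, so the kubeconfig entry has no server endpoint. The retry visible at multinode_test.go:505/508 is just the same jsonpath query re-run a few times, since empty pod IPs can also be a transient condition right after a deploy. A minimal sketch of that pattern follows; the binary path, retry budget, and sleep are illustrative assumptions, not minikube's actual helper.

```go
// Sketch only: re-run the pod-IP query the way the harness does above,
// treating both a non-zero exit and empty output as "may be temporary".
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func podIPs(profile string) (string, error) {
	var lastErr error
	for attempt := 1; attempt <= 3; attempt++ { // retry budget is an assumption
		out, err := exec.Command("out/minikube-darwin-arm64", "kubectl",
			"-p", profile, "--",
			"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").CombinedOutput()
		if err == nil && strings.TrimSpace(string(out)) != "" {
			return strings.TrimSpace(string(out)), nil
		}
		lastErr = fmt.Errorf("attempt %d: %v: %s", attempt, err, out)
		time.Sleep(time.Second)
	}
	return "", lastErr
}

func main() {
	ips, err := podIPs("multinode-552000")
	if err != nil {
		fmt.Println("failed to retrieve Pod IPs (may be temporary):", err)
		return
	}
	fmt.Println("pod IPs:", ips)
}
```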

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-552000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.005ms)

** stderr ** 
	error: no server found for cluster "multinode-552000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000: exit status 7 (30.087375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-552000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-552000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-552000 -v 3 --alsologtostderr: exit status 83 (40.818625ms)

-- stdout --
	* The control-plane node multinode-552000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-552000"

-- /stdout --
** stderr ** 
	I0812 03:29:52.120851    8036 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:29:52.121029    8036 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:29:52.121032    8036 out.go:304] Setting ErrFile to fd 2...
	I0812 03:29:52.121034    8036 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:29:52.121186    8036 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:29:52.121421    8036 mustload.go:65] Loading cluster: multinode-552000
	I0812 03:29:52.121607    8036 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:29:52.126130    8036 out.go:177] * The control-plane node multinode-552000 host is not running: state=Stopped
	I0812 03:29:52.129137    8036 out.go:177]   To start a cluster, run: "minikube start -p multinode-552000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-552000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000: exit status 7 (30.195042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-552000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)
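
`node add` refuses to run against a stopped control plane and exits with status 83, which is exactly what the post-mortem's `status --format={{.Host}}` check confirms. A small sketch of guarding the call the same way; the binary path and hard-coded profile are for illustration only.

```go
// Sketch: check the control-plane host state before `node add`, mirroring
// the post-mortem command at helpers_test.go:239. `status` itself exits
// non-zero for a stopped host (exit 7 above), so its error is ignored here
// and only stdout is inspected.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, _ := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "multinode-552000", "-n", "multinode-552000").Output()
	if state := strings.TrimSpace(string(out)); state != "Running" {
		fmt.Printf("host is %q; run \"minikube start -p multinode-552000\" first\n", state)
		return
	}
	err := exec.Command("out/minikube-darwin-arm64",
		"node", "add", "-p", "multinode-552000", "-v", "3").Run()
	fmt.Println("node add:", err)
}
```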

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-552000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-552000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.668542ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-552000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-552000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-552000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000: exit status 7 (29.733958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-552000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.07s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-552000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-552000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-552000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNU
MACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-552000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVer
sion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":
\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000: exit status 7 (28.982334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-552000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.07s)
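
The assertion at multinode_test.go:166 decodes the `profile list --output json` blob above and expects three entries in Config.Nodes; this run only ever created the single control-plane node, so the count is 1. A trimmed-down sketch of that decode, with struct fields limited to what the JSON above actually shows:

```go
// Sketch: count a profile's nodes from `profile list --output json`.
// Only the fields visible in the output above are modeled; encoding/json
// ignores the rest of the config.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type profileList struct {
	Valid []struct {
		Name   string
		Config struct {
			Nodes []struct {
				Name         string
				ControlPlane bool
				Worker       bool
			}
		}
	}
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64",
		"profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes)) // 1 here, 3 expected
	}
}
```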

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 status --output json --alsologtostderr: exit status 7 (28.919875ms)

-- stdout --
	{"Name":"multinode-552000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0812 03:29:52.323249    8048 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:29:52.323481    8048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:29:52.323484    8048 out.go:304] Setting ErrFile to fd 2...
	I0812 03:29:52.323487    8048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:29:52.323639    8048 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:29:52.323759    8048 out.go:298] Setting JSON to true
	I0812 03:29:52.323769    8048 mustload.go:65] Loading cluster: multinode-552000
	I0812 03:29:52.323831    8048 notify.go:220] Checking for updates...
	I0812 03:29:52.323964    8048 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:29:52.323971    8048 status.go:255] checking status of multinode-552000 ...
	I0812 03:29:52.324203    8048 status.go:330] multinode-552000 host status = "Stopped" (err=<nil>)
	I0812 03:29:52.324206    8048 status.go:343] host is not running, skipping remaining checks
	I0812 03:29:52.324208    8048 status.go:257] multinode-552000 status: &{Name:multinode-552000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-552000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000: exit status 7 (28.929166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-552000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
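
The decode error at multinode_test.go:191 (`json: cannot unmarshal object into Go value of type []cmd.Status`) is a shape mismatch: with only one node, `status --output json` prints a bare object (see the stdout above), while the test unmarshals into a slice. One tolerant way to read either shape, sketched with a Status struct reduced to the fields shown above:

```go
// Sketch: accept both the single-object and array forms of
// `minikube status --output json`.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func decodeStatuses(raw []byte) ([]Status, error) {
	raw = bytes.TrimSpace(raw)
	if len(raw) > 0 && raw[0] == '{' { // one node: bare object
		var s Status
		if err := json.Unmarshal(raw, &s); err != nil {
			return nil, err
		}
		return []Status{s}, nil
	}
	var ss []Status // several nodes: array
	return ss, json.Unmarshal(raw, &ss)
}

func main() {
	raw := []byte(`{"Name":"multinode-552000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	ss, err := decodeStatuses(raw)
	fmt.Println(ss, err)
}
```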

TestMultiNode/serial/StopNode (0.13s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 node stop m03: exit status 85 (45.180584ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-552000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 status: exit status 7 (29.0215ms)

-- stdout --
	multinode-552000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 status --alsologtostderr: exit status 7 (29.410708ms)

-- stdout --
	multinode-552000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0812 03:29:52.456788    8056 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:29:52.456946    8056 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:29:52.456949    8056 out.go:304] Setting ErrFile to fd 2...
	I0812 03:29:52.456951    8056 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:29:52.457098    8056 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:29:52.457217    8056 out.go:298] Setting JSON to false
	I0812 03:29:52.457227    8056 mustload.go:65] Loading cluster: multinode-552000
	I0812 03:29:52.457282    8056 notify.go:220] Checking for updates...
	I0812 03:29:52.457424    8056 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:29:52.457429    8056 status.go:255] checking status of multinode-552000 ...
	I0812 03:29:52.457610    8056 status.go:330] multinode-552000 host status = "Stopped" (err=<nil>)
	I0812 03:29:52.457614    8056 status.go:343] host is not running, skipping remaining checks
	I0812 03:29:52.457616    8056 status.go:257] multinode-552000 status: &{Name:multinode-552000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-552000 status --alsologtostderr": multinode-552000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000: exit status 7 (29.871834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-552000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)
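
Exit status 85 (GUEST_NODE_RETRIEVE) here simply means m03 was never created, since the multi-node start failed earlier in the run. Before stopping a named node, the profile's nodes can be enumerated with the same `node list` subcommand the suite uses at multinode_test.go:314. The profile-mNN naming in the check below is an assumption based on minikube's usual node names, not something this log shows:

```go
// Sketch: verify a node exists before `node stop`, to avoid the
// GUEST_NODE_RETRIEVE failure seen above when m03 was never created.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64",
		"node", "list", "-p", "multinode-552000").Output()
	if err != nil {
		fmt.Println("node list failed:", err)
		return
	}
	if !strings.Contains(string(out), "multinode-552000-m03") { // assumed naming
		fmt.Println("node m03 not present; nothing to stop")
		return
	}
	err = exec.Command("out/minikube-darwin-arm64",
		"-p", "multinode-552000", "node", "stop", "m03").Run()
	fmt.Println("node stop m03:", err)
}
```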

TestMultiNode/serial/StartAfterStop (58.4s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 node start m03 -v=7 --alsologtostderr: exit status 85 (47.024625ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0812 03:29:52.516055    8060 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:29:52.516606    8060 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:29:52.516609    8060 out.go:304] Setting ErrFile to fd 2...
	I0812 03:29:52.516613    8060 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:29:52.516791    8060 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:29:52.517011    8060 mustload.go:65] Loading cluster: multinode-552000
	I0812 03:29:52.517215    8060 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:29:52.520986    8060 out.go:177] 
	W0812 03:29:52.525003    8060 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0812 03:29:52.525010    8060 out.go:239] * 
	* 
	W0812 03:29:52.526967    8060 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:29:52.530893    8060 out.go:177] 

** /stderr **
multinode_test.go:284: I0812 03:29:52.516055    8060 out.go:291] Setting OutFile to fd 1 ...
I0812 03:29:52.516606    8060 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 03:29:52.516609    8060 out.go:304] Setting ErrFile to fd 2...
I0812 03:29:52.516613    8060 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 03:29:52.516791    8060 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
I0812 03:29:52.517011    8060 mustload.go:65] Loading cluster: multinode-552000
I0812 03:29:52.517215    8060 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0812 03:29:52.520986    8060 out.go:177] 
W0812 03:29:52.525003    8060 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0812 03:29:52.525010    8060 out.go:239] * 
* 
W0812 03:29:52.526967    8060 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0812 03:29:52.530893    8060 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-552000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr: exit status 7 (29.613958ms)

-- stdout --
	multinode-552000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0812 03:29:52.563804    8062 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:29:52.563948    8062 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:29:52.563952    8062 out.go:304] Setting ErrFile to fd 2...
	I0812 03:29:52.563954    8062 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:29:52.564093    8062 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:29:52.564217    8062 out.go:298] Setting JSON to false
	I0812 03:29:52.564227    8062 mustload.go:65] Loading cluster: multinode-552000
	I0812 03:29:52.564293    8062 notify.go:220] Checking for updates...
	I0812 03:29:52.564423    8062 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:29:52.564429    8062 status.go:255] checking status of multinode-552000 ...
	I0812 03:29:52.564639    8062 status.go:330] multinode-552000 host status = "Stopped" (err=<nil>)
	I0812 03:29:52.564643    8062 status.go:343] host is not running, skipping remaining checks
	I0812 03:29:52.564645    8062 status.go:257] multinode-552000 status: &{Name:multinode-552000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr: exit status 7 (73.192292ms)

-- stdout --
	multinode-552000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0812 03:29:53.303029    8064 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:29:53.303211    8064 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:29:53.303219    8064 out.go:304] Setting ErrFile to fd 2...
	I0812 03:29:53.303225    8064 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:29:53.303408    8064 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:29:53.303552    8064 out.go:298] Setting JSON to false
	I0812 03:29:53.303565    8064 mustload.go:65] Loading cluster: multinode-552000
	I0812 03:29:53.303606    8064 notify.go:220] Checking for updates...
	I0812 03:29:53.303837    8064 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:29:53.303845    8064 status.go:255] checking status of multinode-552000 ...
	I0812 03:29:53.304112    8064 status.go:330] multinode-552000 host status = "Stopped" (err=<nil>)
	I0812 03:29:53.304117    8064 status.go:343] host is not running, skipping remaining checks
	I0812 03:29:53.304120    8064 status.go:257] multinode-552000 status: &{Name:multinode-552000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr: exit status 7 (74.1225ms)

-- stdout --
	multinode-552000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0812 03:29:55.499885    8068 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:29:55.500124    8068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:29:55.500129    8068 out.go:304] Setting ErrFile to fd 2...
	I0812 03:29:55.500132    8068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:29:55.500317    8068 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:29:55.500477    8068 out.go:298] Setting JSON to false
	I0812 03:29:55.500492    8068 mustload.go:65] Loading cluster: multinode-552000
	I0812 03:29:55.500539    8068 notify.go:220] Checking for updates...
	I0812 03:29:55.500811    8068 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:29:55.500820    8068 status.go:255] checking status of multinode-552000 ...
	I0812 03:29:55.501116    8068 status.go:330] multinode-552000 host status = "Stopped" (err=<nil>)
	I0812 03:29:55.501121    8068 status.go:343] host is not running, skipping remaining checks
	I0812 03:29:55.501124    8068 status.go:257] multinode-552000 status: &{Name:multinode-552000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr: exit status 7 (72.324041ms)

-- stdout --
	multinode-552000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0812 03:29:56.915778    8070 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:29:56.915980    8070 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:29:56.915985    8070 out.go:304] Setting ErrFile to fd 2...
	I0812 03:29:56.915988    8070 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:29:56.916178    8070 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:29:56.916330    8070 out.go:298] Setting JSON to false
	I0812 03:29:56.916344    8070 mustload.go:65] Loading cluster: multinode-552000
	I0812 03:29:56.916383    8070 notify.go:220] Checking for updates...
	I0812 03:29:56.916603    8070 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:29:56.916614    8070 status.go:255] checking status of multinode-552000 ...
	I0812 03:29:56.916916    8070 status.go:330] multinode-552000 host status = "Stopped" (err=<nil>)
	I0812 03:29:56.916921    8070 status.go:343] host is not running, skipping remaining checks
	I0812 03:29:56.916924    8070 status.go:257] multinode-552000 status: &{Name:multinode-552000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr: exit status 7 (63.020708ms)

-- stdout --
	multinode-552000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0812 03:30:00.953521    8137 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:30:00.953713    8137 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:30:00.953719    8137 out.go:304] Setting ErrFile to fd 2...
	I0812 03:30:00.953723    8137 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:30:00.953939    8137 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:30:00.954124    8137 out.go:298] Setting JSON to false
	I0812 03:30:00.954137    8137 mustload.go:65] Loading cluster: multinode-552000
	I0812 03:30:00.954183    8137 notify.go:220] Checking for updates...
	I0812 03:30:00.954430    8137 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:30:00.954441    8137 status.go:255] checking status of multinode-552000 ...
	I0812 03:30:00.954743    8137 status.go:330] multinode-552000 host status = "Stopped" (err=<nil>)
	I0812 03:30:00.954748    8137 status.go:343] host is not running, skipping remaining checks
	I0812 03:30:00.954751    8137 status.go:257] multinode-552000 status: &{Name:multinode-552000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr: exit status 7 (74.967209ms)

-- stdout --
	multinode-552000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0812 03:30:07.472604    8363 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:30:07.472794    8363 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:30:07.472798    8363 out.go:304] Setting ErrFile to fd 2...
	I0812 03:30:07.472801    8363 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:30:07.472965    8363 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:30:07.473136    8363 out.go:298] Setting JSON to false
	I0812 03:30:07.473148    8363 mustload.go:65] Loading cluster: multinode-552000
	I0812 03:30:07.473189    8363 notify.go:220] Checking for updates...
	I0812 03:30:07.473411    8363 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:30:07.473421    8363 status.go:255] checking status of multinode-552000 ...
	I0812 03:30:07.473685    8363 status.go:330] multinode-552000 host status = "Stopped" (err=<nil>)
	I0812 03:30:07.473690    8363 status.go:343] host is not running, skipping remaining checks
	I0812 03:30:07.473693    8363 status.go:257] multinode-552000 status: &{Name:multinode-552000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr: exit status 7 (72.792375ms)

-- stdout --
	multinode-552000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0812 03:30:14.235239    8367 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:30:14.235485    8367 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:30:14.235490    8367 out.go:304] Setting ErrFile to fd 2...
	I0812 03:30:14.235493    8367 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:30:14.235691    8367 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:30:14.235862    8367 out.go:298] Setting JSON to false
	I0812 03:30:14.235876    8367 mustload.go:65] Loading cluster: multinode-552000
	I0812 03:30:14.235932    8367 notify.go:220] Checking for updates...
	I0812 03:30:14.236157    8367 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:30:14.236164    8367 status.go:255] checking status of multinode-552000 ...
	I0812 03:30:14.236484    8367 status.go:330] multinode-552000 host status = "Stopped" (err=<nil>)
	I0812 03:30:14.236489    8367 status.go:343] host is not running, skipping remaining checks
	I0812 03:30:14.236492    8367 status.go:257] multinode-552000 status: &{Name:multinode-552000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr: exit status 7 (59.059291ms)

-- stdout --
	multinode-552000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0812 03:30:21.699348    8369 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:30:21.699539    8369 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:30:21.699543    8369 out.go:304] Setting ErrFile to fd 2...
	I0812 03:30:21.699547    8369 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:30:21.699734    8369 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:30:21.699909    8369 out.go:298] Setting JSON to false
	I0812 03:30:21.699923    8369 mustload.go:65] Loading cluster: multinode-552000
	I0812 03:30:21.699959    8369 notify.go:220] Checking for updates...
	I0812 03:30:21.700192    8369 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:30:21.700201    8369 status.go:255] checking status of multinode-552000 ...
	I0812 03:30:21.700464    8369 status.go:330] multinode-552000 host status = "Stopped" (err=<nil>)
	I0812 03:30:21.700469    8369 status.go:343] host is not running, skipping remaining checks
	I0812 03:30:21.700472    8369 status.go:257] multinode-552000 status: &{Name:multinode-552000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr: exit status 7 (71.427583ms)

-- stdout --
	multinode-552000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0812 03:30:37.697798    8376 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:30:37.698043    8376 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:30:37.698047    8376 out.go:304] Setting ErrFile to fd 2...
	I0812 03:30:37.698051    8376 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:30:37.698266    8376 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:30:37.698420    8376 out.go:298] Setting JSON to false
	I0812 03:30:37.698433    8376 mustload.go:65] Loading cluster: multinode-552000
	I0812 03:30:37.698471    8376 notify.go:220] Checking for updates...
	I0812 03:30:37.698715    8376 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:30:37.698723    8376 status.go:255] checking status of multinode-552000 ...
	I0812 03:30:37.699001    8376 status.go:330] multinode-552000 host status = "Stopped" (err=<nil>)
	I0812 03:30:37.699005    8376 status.go:343] host is not running, skipping remaining checks
	I0812 03:30:37.699008    8376 status.go:257] multinode-552000 status: &{Name:multinode-552000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr: exit status 7 (76.142ms)

-- stdout --
	multinode-552000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0812 03:30:50.848322    8382 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:30:50.848549    8382 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:30:50.848553    8382 out.go:304] Setting ErrFile to fd 2...
	I0812 03:30:50.848557    8382 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:30:50.848736    8382 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:30:50.848923    8382 out.go:298] Setting JSON to false
	I0812 03:30:50.848950    8382 mustload.go:65] Loading cluster: multinode-552000
	I0812 03:30:50.848980    8382 notify.go:220] Checking for updates...
	I0812 03:30:50.849253    8382 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:30:50.849262    8382 status.go:255] checking status of multinode-552000 ...
	I0812 03:30:50.849566    8382 status.go:330] multinode-552000 host status = "Stopped" (err=<nil>)
	I0812 03:30:50.849571    8382 status.go:343] host is not running, skipping remaining checks
	I0812 03:30:50.849574    8382 status.go:257] multinode-552000 status: &{Name:multinode-552000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-552000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000: exit status 7 (33.320834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-552000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (58.40s)
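
The ten status checks above (stderr timestamps running from 03:29:52 to 03:30:50) show the harness polling with widening gaps before declaring failure at multinode_test.go:294. The same wait can be expressed as a simple backoff loop; the intervals and one-minute budget below are illustrative, not the harness's actual schedule:

```go
// Sketch: poll `status --format={{.Host}}` with exponential backoff until the
// host reports Running or the time budget runs out.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	delay := time.Second
	for deadline := time.Now().Add(time.Minute); time.Now().Before(deadline); {
		out, _ := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", "multinode-552000").Output() // exits 7 while stopped
		if strings.TrimSpace(string(out)) == "Running" {
			fmt.Println("host is running")
			return
		}
		time.Sleep(delay)
		delay *= 2
	}
	fmt.Println("gave up: host never reached Running")
}
```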

TestMultiNode/serial/RestartKeepsNodes (8.67s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-552000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-552000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-552000: (3.306342667s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-552000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-552000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.225239791s)

-- stdout --
	* [multinode-552000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-552000" primary control-plane node in "multinode-552000" cluster
	* Restarting existing qemu2 VM for "multinode-552000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-552000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0812 03:30:54.279402    8406 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:30:54.279561    8406 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:30:54.279566    8406 out.go:304] Setting ErrFile to fd 2...
	I0812 03:30:54.279569    8406 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:30:54.279742    8406 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:30:54.280927    8406 out.go:298] Setting JSON to false
	I0812 03:30:54.299926    8406 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5424,"bootTime":1723453230,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:30:54.299987    8406 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:30:54.304215    8406 out.go:177] * [multinode-552000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:30:54.310895    8406 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:30:54.310971    8406 notify.go:220] Checking for updates...
	I0812 03:30:54.317908    8406 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:30:54.320931    8406 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:30:54.323919    8406 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:30:54.326868    8406 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:30:54.329900    8406 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:30:54.333168    8406 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:30:54.333224    8406 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:30:54.337815    8406 out.go:177] * Using the qemu2 driver based on existing profile
	I0812 03:30:54.352536    8406 start.go:297] selected driver: qemu2
	I0812 03:30:54.352544    8406 start.go:901] validating driver "qemu2" against &{Name:multinode-552000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-552000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:30:54.352614    8406 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:30:54.355176    8406 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 03:30:54.355232    8406 cni.go:84] Creating CNI manager for ""
	I0812 03:30:54.355237    8406 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0812 03:30:54.355289    8406 start.go:340] cluster config:
	{Name:multinode-552000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-552000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:30:54.359265    8406 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:30:54.366906    8406 out.go:177] * Starting "multinode-552000" primary control-plane node in "multinode-552000" cluster
	I0812 03:30:54.369758    8406 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0812 03:30:54.369777    8406 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0812 03:30:54.369787    8406 cache.go:56] Caching tarball of preloaded images
	I0812 03:30:54.369874    8406 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0812 03:30:54.369880    8406 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0812 03:30:54.369946    8406 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/multinode-552000/config.json ...
	I0812 03:30:54.370314    8406 start.go:360] acquireMachinesLock for multinode-552000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:30:54.370361    8406 start.go:364] duration metric: took 40.084µs to acquireMachinesLock for "multinode-552000"
	I0812 03:30:54.370372    8406 start.go:96] Skipping create...Using existing machine configuration
	I0812 03:30:54.370378    8406 fix.go:54] fixHost starting: 
	I0812 03:30:54.370526    8406 fix.go:112] recreateIfNeeded on multinode-552000: state=Stopped err=<nil>
	W0812 03:30:54.370534    8406 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 03:30:54.378745    8406 out.go:177] * Restarting existing qemu2 VM for "multinode-552000" ...
	I0812 03:30:54.382865    8406 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:30:54.382904    8406 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/multinode-552000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/multinode-552000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/multinode-552000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:97:c5:f9:f7:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/multinode-552000/disk.qcow2
	I0812 03:30:54.385070    8406 main.go:141] libmachine: STDOUT: 
	I0812 03:30:54.385090    8406 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:30:54.385119    8406 fix.go:56] duration metric: took 14.742125ms for fixHost
	I0812 03:30:54.385124    8406 start.go:83] releasing machines lock for "multinode-552000", held for 14.75875ms
	W0812 03:30:54.385131    8406 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:30:54.385172    8406 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:30:54.385178    8406 start.go:729] Will try again in 5 seconds ...
	I0812 03:30:59.387251    8406 start.go:360] acquireMachinesLock for multinode-552000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:30:59.387595    8406 start.go:364] duration metric: took 274.708µs to acquireMachinesLock for "multinode-552000"
	I0812 03:30:59.387736    8406 start.go:96] Skipping create...Using existing machine configuration
	I0812 03:30:59.387756    8406 fix.go:54] fixHost starting: 
	I0812 03:30:59.388441    8406 fix.go:112] recreateIfNeeded on multinode-552000: state=Stopped err=<nil>
	W0812 03:30:59.388464    8406 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 03:30:59.396885    8406 out.go:177] * Restarting existing qemu2 VM for "multinode-552000" ...
	I0812 03:30:59.400877    8406 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:30:59.401148    8406 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/multinode-552000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/multinode-552000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/multinode-552000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:97:c5:f9:f7:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/multinode-552000/disk.qcow2
	I0812 03:30:59.409907    8406 main.go:141] libmachine: STDOUT: 
	I0812 03:30:59.409992    8406 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:30:59.410067    8406 fix.go:56] duration metric: took 22.31275ms for fixHost
	I0812 03:30:59.410081    8406 start.go:83] releasing machines lock for "multinode-552000", held for 22.466209ms
	W0812 03:30:59.410271    8406 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-552000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-552000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:30:59.416909    8406 out.go:177] 
	W0812 03:30:59.420969    8406 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:30:59.421002    8406 out.go:239] * 
	* 
	W0812 03:30:59.423533    8406 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:30:59.431787    8406 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-552000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-552000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000: exit status 7 (33.354166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-552000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.67s)
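Every restart attempt in the log above fails at the same point: the qemu2 driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot connect to the daemon's unix socket at /var/run/socket_vmnet, so the VM never receives its network file descriptor. A minimal diagnostic sketch for the host (paths taken from the log; whether the daemon runs under launchd or was started by hand is an assumption about the CI setup):

	# Does the socket exist, and is any socket_vmnet process alive to serve it?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet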

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 node delete m03: exit status 83 (41.809792ms)

-- stdout --
	* The control-plane node multinode-552000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-552000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-552000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 status --alsologtostderr: exit status 7 (28.764417ms)

-- stdout --
	multinode-552000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0812 03:30:59.619163    8420 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:30:59.619360    8420 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:30:59.619363    8420 out.go:304] Setting ErrFile to fd 2...
	I0812 03:30:59.619365    8420 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:30:59.619502    8420 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:30:59.619615    8420 out.go:298] Setting JSON to false
	I0812 03:30:59.619631    8420 mustload.go:65] Loading cluster: multinode-552000
	I0812 03:30:59.619670    8420 notify.go:220] Checking for updates...
	I0812 03:30:59.619828    8420 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:30:59.619834    8420 status.go:255] checking status of multinode-552000 ...
	I0812 03:30:59.620042    8420 status.go:330] multinode-552000 host status = "Stopped" (err=<nil>)
	I0812 03:30:59.620046    8420 status.go:343] host is not running, skipping remaining checks
	I0812 03:30:59.620048    8420 status.go:257] multinode-552000 status: &{Name:multinode-552000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-552000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000: exit status 7 (29.733958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-552000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
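The exit status 83 above is minikube declining to operate on a stopped cluster rather than a failure of the delete itself. A hypothetical guard that makes that precondition explicit (profile and node names taken from the test; the guard is not part of the suite):

	if out/minikube-darwin-arm64 status --format='{{.Host}}' -p multinode-552000 | grep -q Running; then
	  out/minikube-darwin-arm64 -p multinode-552000 node delete m03
	else
	  echo 'host is stopped; run: minikube start -p multinode-552000' >&2
	fi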

TestMultiNode/serial/StopMultiNode (3.32s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-552000 stop: (3.192701s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 status: exit status 7 (64.252792ms)

-- stdout --
	multinode-552000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-552000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-552000 status --alsologtostderr: exit status 7 (32.509916ms)

-- stdout --
	multinode-552000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0812 03:31:02.939192    8444 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:31:02.939347    8444 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:31:02.939350    8444 out.go:304] Setting ErrFile to fd 2...
	I0812 03:31:02.939352    8444 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:31:02.939474    8444 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:31:02.939606    8444 out.go:298] Setting JSON to false
	I0812 03:31:02.939616    8444 mustload.go:65] Loading cluster: multinode-552000
	I0812 03:31:02.939666    8444 notify.go:220] Checking for updates...
	I0812 03:31:02.939816    8444 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:31:02.939822    8444 status.go:255] checking status of multinode-552000 ...
	I0812 03:31:02.940031    8444 status.go:330] multinode-552000 host status = "Stopped" (err=<nil>)
	I0812 03:31:02.940035    8444 status.go:343] host is not running, skipping remaining checks
	I0812 03:31:02.940037    8444 status.go:257] multinode-552000 status: &{Name:multinode-552000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-552000 status --alsologtostderr": multinode-552000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-552000 status --alsologtostderr": multinode-552000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000: exit status 7 (29.55125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-552000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.32s)
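The "incorrect number of stopped hosts/kubelets" assertions count the "host: Stopped" and "kubelet: Stopped" lines in the status output and expect one per node; since the worker nodes were never added, only the control plane reports. A rough shell equivalent of the check (an approximation of multinode_test.go:364/:368, not the test's actual Go code):

	# Expected to match the cluster's node count; this run yields 1.
	out/minikube-darwin-arm64 -p multinode-552000 status | grep -c 'host: Stopped'
	out/minikube-darwin-arm64 -p multinode-552000 status | grep -c 'kubelet: Stopped'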

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-552000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-552000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.183313042s)

-- stdout --
	* [multinode-552000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-552000" primary control-plane node in "multinode-552000" cluster
	* Restarting existing qemu2 VM for "multinode-552000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-552000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0812 03:31:02.998523    8448 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:31:02.998665    8448 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:31:02.998668    8448 out.go:304] Setting ErrFile to fd 2...
	I0812 03:31:02.998670    8448 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:31:02.998805    8448 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:31:02.999779    8448 out.go:298] Setting JSON to false
	I0812 03:31:03.015567    8448 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5433,"bootTime":1723453230,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:31:03.015647    8448 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:31:03.021203    8448 out.go:177] * [multinode-552000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:31:03.028200    8448 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:31:03.028254    8448 notify.go:220] Checking for updates...
	I0812 03:31:03.035087    8448 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:31:03.038152    8448 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:31:03.041149    8448 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:31:03.044110    8448 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:31:03.047119    8448 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:31:03.050477    8448 config.go:182] Loaded profile config "multinode-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:31:03.050759    8448 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:31:03.055114    8448 out.go:177] * Using the qemu2 driver based on existing profile
	I0812 03:31:03.062165    8448 start.go:297] selected driver: qemu2
	I0812 03:31:03.062171    8448 start.go:901] validating driver "qemu2" against &{Name:multinode-552000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-552000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:31:03.062236    8448 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:31:03.064546    8448 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 03:31:03.064585    8448 cni.go:84] Creating CNI manager for ""
	I0812 03:31:03.064590    8448 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0812 03:31:03.064635    8448 start.go:340] cluster config:
	{Name:multinode-552000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-552000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:31:03.068247    8448 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:31:03.076100    8448 out.go:177] * Starting "multinode-552000" primary control-plane node in "multinode-552000" cluster
	I0812 03:31:03.078993    8448 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0812 03:31:03.079008    8448 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0812 03:31:03.079015    8448 cache.go:56] Caching tarball of preloaded images
	I0812 03:31:03.079071    8448 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0812 03:31:03.079076    8448 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0812 03:31:03.079131    8448 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/multinode-552000/config.json ...
	I0812 03:31:03.079560    8448 start.go:360] acquireMachinesLock for multinode-552000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:31:03.079591    8448 start.go:364] duration metric: took 25.291µs to acquireMachinesLock for "multinode-552000"
	I0812 03:31:03.079602    8448 start.go:96] Skipping create...Using existing machine configuration
	I0812 03:31:03.079610    8448 fix.go:54] fixHost starting: 
	I0812 03:31:03.079727    8448 fix.go:112] recreateIfNeeded on multinode-552000: state=Stopped err=<nil>
	W0812 03:31:03.079735    8448 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 03:31:03.087981    8448 out.go:177] * Restarting existing qemu2 VM for "multinode-552000" ...
	I0812 03:31:03.092062    8448 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:31:03.092099    8448 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/multinode-552000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/multinode-552000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/multinode-552000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:97:c5:f9:f7:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/multinode-552000/disk.qcow2
	I0812 03:31:03.094163    8448 main.go:141] libmachine: STDOUT: 
	I0812 03:31:03.094183    8448 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:31:03.094213    8448 fix.go:56] duration metric: took 14.604917ms for fixHost
	I0812 03:31:03.094219    8448 start.go:83] releasing machines lock for "multinode-552000", held for 14.622792ms
	W0812 03:31:03.094224    8448 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:31:03.094260    8448 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:31:03.094266    8448 start.go:729] Will try again in 5 seconds ...
	I0812 03:31:08.096364    8448 start.go:360] acquireMachinesLock for multinode-552000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:31:08.096727    8448 start.go:364] duration metric: took 256.292µs to acquireMachinesLock for "multinode-552000"
	I0812 03:31:08.096813    8448 start.go:96] Skipping create...Using existing machine configuration
	I0812 03:31:08.096828    8448 fix.go:54] fixHost starting: 
	I0812 03:31:08.097302    8448 fix.go:112] recreateIfNeeded on multinode-552000: state=Stopped err=<nil>
	W0812 03:31:08.097317    8448 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 03:31:08.102927    8448 out.go:177] * Restarting existing qemu2 VM for "multinode-552000" ...
	I0812 03:31:08.106798    8448 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:31:08.107016    8448 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/multinode-552000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/multinode-552000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/multinode-552000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:97:c5:f9:f7:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/multinode-552000/disk.qcow2
	I0812 03:31:08.116504    8448 main.go:141] libmachine: STDOUT: 
	I0812 03:31:08.116570    8448 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:31:08.116649    8448 fix.go:56] duration metric: took 19.822ms for fixHost
	I0812 03:31:08.116672    8448 start.go:83] releasing machines lock for "multinode-552000", held for 19.904291ms
	W0812 03:31:08.116843    8448 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-552000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-552000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:31:08.125749    8448 out.go:177] 
	W0812 03:31:08.129861    8448 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:31:08.129883    8448 out.go:239] * 
	* 
	W0812 03:31:08.132407    8448 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:31:08.140692    8448 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-552000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000: exit status 7 (68.513833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-552000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
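The hint embedded in the log ("minikube delete -p multinode-552000" may fix it) cannot help on its own, because the root cause is the unreachable socket_vmnet daemon. A plausible recovery sequence, assuming socket_vmnet was installed via Homebrew (the service name and install method are assumptions; the daemon needs root to open the vmnet interface):

	sudo brew services restart socket_vmnet
	out/minikube-darwin-arm64 delete -p multinode-552000
	out/minikube-darwin-arm64 start -p multinode-552000 --driver=qemu2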

TestMultiNode/serial/ValidateNameConflict (20.85s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-552000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-552000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-552000-m01 --driver=qemu2 : exit status 80 (10.270018791s)

-- stdout --
	* [multinode-552000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-552000-m01" primary control-plane node in "multinode-552000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-552000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-552000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
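Because this profile is created from scratch, minikube prints "Automatically selected the socket_vmnet network". If the daemon cannot be repaired on the host, recent minikube releases let the qemu2 driver fall back to user-mode networking instead; flag support and its limitations (no minikube tunnel, no multi-node) vary by version, so treat this as an option to verify rather than a known-good fix:

	out/minikube-darwin-arm64 start -p multinode-552000-m01 --driver=qemu2 --network=builtin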
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-552000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-552000-m02 --driver=qemu2 : exit status 80 (10.345789708s)

-- stdout --
	* [multinode-552000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-552000-m02" primary control-plane node in "multinode-552000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-552000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-552000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-552000-m02 --driver=qemu2 " : exit status 80
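The create/delete/retry loop never gets past the client-side connect. socket_vmnet_client simply connects to the socket and execs the rest of its argv with the connection as an inherited file descriptor (hence the "-netdev socket,id=net0,fd=3" in the qemu command line earlier), so it can be smoke-tested without qemu at all; a one-liner under that assumption, with /usr/bin/true standing in for the VM command:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true && echo 'socket reachable'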
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-552000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-552000: exit status 83 (84.420125ms)

-- stdout --
	* The control-plane node multinode-552000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-552000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-552000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-552000 -n multinode-552000: exit status 7 (30.324834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-552000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.85s)

TestPreload (10.21s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-577000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-577000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (10.0638805s)

-- stdout --
	* [test-preload-577000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-577000" primary control-plane node in "test-preload-577000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-577000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0812 03:31:29.220149    8514 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:31:29.220319    8514 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:31:29.220322    8514 out.go:304] Setting ErrFile to fd 2...
	I0812 03:31:29.220325    8514 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:31:29.220446    8514 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:31:29.221397    8514 out.go:298] Setting JSON to false
	I0812 03:31:29.237171    8514 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5459,"bootTime":1723453230,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:31:29.237264    8514 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:31:29.243845    8514 out.go:177] * [test-preload-577000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:31:29.251709    8514 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:31:29.251751    8514 notify.go:220] Checking for updates...
	I0812 03:31:29.258672    8514 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:31:29.261718    8514 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:31:29.264746    8514 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:31:29.267734    8514 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:31:29.270689    8514 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:31:29.274135    8514 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:31:29.274190    8514 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:31:29.278551    8514 out.go:177] * Using the qemu2 driver based on user configuration
	I0812 03:31:29.285637    8514 start.go:297] selected driver: qemu2
	I0812 03:31:29.285642    8514 start.go:901] validating driver "qemu2" against <nil>
	I0812 03:31:29.285648    8514 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:31:29.287851    8514 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 03:31:29.290608    8514 out.go:177] * Automatically selected the socket_vmnet network
	I0812 03:31:29.293797    8514 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 03:31:29.293811    8514 cni.go:84] Creating CNI manager for ""
	I0812 03:31:29.293817    8514 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0812 03:31:29.293821    8514 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0812 03:31:29.293852    8514 start.go:340] cluster config:
	{Name:test-preload-577000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-577000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:31:29.297440    8514 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:31:29.304657    8514 out.go:177] * Starting "test-preload-577000" primary control-plane node in "test-preload-577000" cluster
	I0812 03:31:29.308683    8514 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0812 03:31:29.308768    8514 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/test-preload-577000/config.json ...
	I0812 03:31:29.308785    8514 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/test-preload-577000/config.json: {Name:mke74d4d7953c373c3ba590ebf766ed72915a51e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:31:29.308804    8514 cache.go:107] acquiring lock: {Name:mk21ce3047aface3b0ba5fdbb92052c04fda44f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:31:29.308811    8514 cache.go:107] acquiring lock: {Name:mk3ce67963ce86ed344585bd6c0d2a481550e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:31:29.308826    8514 cache.go:107] acquiring lock: {Name:mk5ae6209d877fb062f7d5ca4a1667122ef05039 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:31:29.308816    8514 cache.go:107] acquiring lock: {Name:mk7be9b831a437d374f8122586bed23cf73568ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:31:29.308833    8514 cache.go:107] acquiring lock: {Name:mkcbe0bb8986d4c69adb2f5cf3162a9f6658c5bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:31:29.308839    8514 cache.go:107] acquiring lock: {Name:mk7f33f651d851a89f60f9f3b40192055189bd30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:31:29.308850    8514 cache.go:107] acquiring lock: {Name:mka2ee4d31f4e91e2d37ee1cef2bfcf305c04f5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:31:29.308854    8514 cache.go:107] acquiring lock: {Name:mkb63db3e646d4560563ebe19dff300b021cebfd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:31:29.309242    8514 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0812 03:31:29.309260    8514 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0812 03:31:29.309305    8514 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0812 03:31:29.309302    8514 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 03:31:29.309315    8514 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0812 03:31:29.309406    8514 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0812 03:31:29.309439    8514 start.go:360] acquireMachinesLock for test-preload-577000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:31:29.309469    8514 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0812 03:31:29.309485    8514 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0812 03:31:29.309477    8514 start.go:364] duration metric: took 30.833µs to acquireMachinesLock for "test-preload-577000"
	I0812 03:31:29.309514    8514 start.go:93] Provisioning new machine with config: &{Name:test-preload-577000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-577000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:31:29.309563    8514 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:31:29.315690    8514 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0812 03:31:29.320068    8514 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0812 03:31:29.320162    8514 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0812 03:31:29.320659    8514 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0812 03:31:29.320745    8514 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0812 03:31:29.321586    8514 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0812 03:31:29.322327    8514 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0812 03:31:29.322364    8514 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 03:31:29.322849    8514 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0812 03:31:29.332924    8514 start.go:159] libmachine.API.Create for "test-preload-577000" (driver="qemu2")
	I0812 03:31:29.332948    8514 client.go:168] LocalClient.Create starting
	I0812 03:31:29.333068    8514 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:31:29.333103    8514 main.go:141] libmachine: Decoding PEM data...
	I0812 03:31:29.333116    8514 main.go:141] libmachine: Parsing certificate...
	I0812 03:31:29.333163    8514 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:31:29.333189    8514 main.go:141] libmachine: Decoding PEM data...
	I0812 03:31:29.333202    8514 main.go:141] libmachine: Parsing certificate...
	I0812 03:31:29.333639    8514 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:31:29.508464    8514 main.go:141] libmachine: Creating SSH key...
	I0812 03:31:29.584725    8514 main.go:141] libmachine: Creating Disk image...
	I0812 03:31:29.584743    8514 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:31:29.584976    8514 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/test-preload-577000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/test-preload-577000/disk.qcow2
	I0812 03:31:29.594890    8514 main.go:141] libmachine: STDOUT: 
	I0812 03:31:29.594911    8514 main.go:141] libmachine: STDERR: 
	I0812 03:31:29.594961    8514 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/test-preload-577000/disk.qcow2 +20000M
	I0812 03:31:29.604147    8514 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:31:29.604191    8514 main.go:141] libmachine: STDERR: 
	I0812 03:31:29.604202    8514 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/test-preload-577000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/test-preload-577000/disk.qcow2
	I0812 03:31:29.604208    8514 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:31:29.604220    8514 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:31:29.604242    8514 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/test-preload-577000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/test-preload-577000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/test-preload-577000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:29:3c:7b:8b:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/test-preload-577000/disk.qcow2
	I0812 03:31:29.606146    8514 main.go:141] libmachine: STDOUT: 
	I0812 03:31:29.606170    8514 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:31:29.606187    8514 client.go:171] duration metric: took 273.23875ms to LocalClient.Create
	I0812 03:31:29.694491    8514 cache.go:162] opening:  /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0812 03:31:29.725106    8514 cache.go:162] opening:  /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0812 03:31:29.750324    8514 cache.go:162] opening:  /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0812 03:31:29.750337    8514 cache.go:162] opening:  /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	W0812 03:31:29.752985    8514 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0812 03:31:29.753010    8514 cache.go:162] opening:  /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0812 03:31:29.784818    8514 cache.go:162] opening:  /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0812 03:31:29.828855    8514 cache.go:162] opening:  /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0812 03:31:29.846797    8514 cache.go:157] /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0812 03:31:29.846825    8514 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 538.004625ms
	I0812 03:31:29.846849    8514 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0812 03:31:30.328430    8514 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0812 03:31:30.328511    8514 cache.go:162] opening:  /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0812 03:31:30.594204    8514 cache.go:157] /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0812 03:31:30.594264    8514 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.28547s
	I0812 03:31:30.594292    8514 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0812 03:31:31.606382    8514 start.go:128] duration metric: took 2.2968325s to createHost
	I0812 03:31:31.606432    8514 start.go:83] releasing machines lock for "test-preload-577000", held for 2.296959125s
	W0812 03:31:31.606503    8514 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:31:31.618620    8514 out.go:177] * Deleting "test-preload-577000" in qemu2 ...
	W0812 03:31:31.650256    8514 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:31:31.650284    8514 start.go:729] Will try again in 5 seconds ...
	I0812 03:31:31.801351    8514 cache.go:157] /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0812 03:31:31.801429    8514 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.492609583s
	I0812 03:31:31.801457    8514 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0812 03:31:31.854137    8514 cache.go:157] /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0812 03:31:31.854177    8514 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.545385667s
	I0812 03:31:31.854199    8514 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0812 03:31:33.507655    8514 cache.go:157] /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0812 03:31:33.507700    8514 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.198957416s
	I0812 03:31:33.507726    8514 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0812 03:31:34.219930    8514 cache.go:157] /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0812 03:31:34.219972    8514 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.911251917s
	I0812 03:31:34.220015    8514 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0812 03:31:35.910270    8514 cache.go:157] /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0812 03:31:35.910656    8514 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.60191425s
	I0812 03:31:35.910734    8514 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0812 03:31:36.650766    8514 start.go:360] acquireMachinesLock for test-preload-577000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:31:36.651207    8514 start.go:364] duration metric: took 357.125µs to acquireMachinesLock for "test-preload-577000"
	I0812 03:31:36.651334    8514 start.go:93] Provisioning new machine with config: &{Name:test-preload-577000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-577000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:31:36.651608    8514 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:31:36.656257    8514 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0812 03:31:36.706612    8514 start.go:159] libmachine.API.Create for "test-preload-577000" (driver="qemu2")
	I0812 03:31:36.706669    8514 client.go:168] LocalClient.Create starting
	I0812 03:31:36.706791    8514 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:31:36.706861    8514 main.go:141] libmachine: Decoding PEM data...
	I0812 03:31:36.706878    8514 main.go:141] libmachine: Parsing certificate...
	I0812 03:31:36.706939    8514 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:31:36.706983    8514 main.go:141] libmachine: Decoding PEM data...
	I0812 03:31:36.706994    8514 main.go:141] libmachine: Parsing certificate...
	I0812 03:31:36.707476    8514 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:31:36.871899    8514 main.go:141] libmachine: Creating SSH key...
	I0812 03:31:37.175326    8514 main.go:141] libmachine: Creating Disk image...
	I0812 03:31:37.175342    8514 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:31:37.175594    8514 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/test-preload-577000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/test-preload-577000/disk.qcow2
	I0812 03:31:37.185765    8514 main.go:141] libmachine: STDOUT: 
	I0812 03:31:37.185796    8514 main.go:141] libmachine: STDERR: 
	I0812 03:31:37.185841    8514 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/test-preload-577000/disk.qcow2 +20000M
	I0812 03:31:37.194040    8514 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:31:37.194066    8514 main.go:141] libmachine: STDERR: 
	I0812 03:31:37.194098    8514 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/test-preload-577000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/test-preload-577000/disk.qcow2
	I0812 03:31:37.194106    8514 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:31:37.194113    8514 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:31:37.194150    8514 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/test-preload-577000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/test-preload-577000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/test-preload-577000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:58:c0:07:15:f2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/test-preload-577000/disk.qcow2
	I0812 03:31:37.195935    8514 main.go:141] libmachine: STDOUT: 
	I0812 03:31:37.195950    8514 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:31:37.195964    8514 client.go:171] duration metric: took 489.298ms to LocalClient.Create
	I0812 03:31:38.813906    8514 cache.go:157] /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0812 03:31:38.813977    8514 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.505291917s
	I0812 03:31:38.814019    8514 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0812 03:31:38.814105    8514 cache.go:87] Successfully saved all images to host disk.
	I0812 03:31:39.198245    8514 start.go:128] duration metric: took 2.546621334s to createHost
	I0812 03:31:39.198306    8514 start.go:83] releasing machines lock for "test-preload-577000", held for 2.547110333s
	W0812 03:31:39.198604    8514 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-577000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:31:39.217224    8514 out.go:177] 
	W0812 03:31:39.222127    8514 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:31:39.222150    8514 out.go:239] * 
	W0812 03:31:39.225027    8514 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:31:39.242107    8514 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-577000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-08-12 03:31:39.259776 -0700 PDT m=+743.452654543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-577000 -n test-preload-577000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-577000 -n test-preload-577000: exit status 7 (68.67ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-577000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-577000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-577000
--- FAIL: TestPreload (10.21s)
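Both create attempts in this test die at the same host-side step: socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched. A minimal pre-flight sketch for the host, assuming socket_vmnet was installed via Homebrew (consistent with the /opt/socket_vmnet and /opt/homebrew paths in the log above); the service name is an assumption, not taken from this report:

    # Verify the daemon's UNIX socket exists before running the qemu2 suite
    ls -l /var/run/socket_vmnet
    # (Re)start the daemon; it must run as root to use the macOS vmnet framework
    sudo brew services restart socket_vmnet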

TestScheduledStopUnix (9.93s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-565000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-565000 --memory=2048 --driver=qemu2 : exit status 80 (9.784839125s)

-- stdout --
	* [scheduled-stop-565000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-565000" primary control-plane node in "scheduled-stop-565000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-565000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-565000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-565000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-565000" primary control-plane node in "scheduled-stop-565000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-565000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-565000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-08-12 03:31:49.188943 -0700 PDT m=+753.381982876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-565000 -n scheduled-stop-565000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-565000 -n scheduled-stop-565000: exit status 7 (69.9555ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-565000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-565000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-565000
--- FAIL: TestScheduledStopUnix (9.93s)
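The failure sequence here is identical to TestPreload: the first create fails, minikube deletes the profile and retries once, and the retry fails the same way. The connection step can be exercised in isolation with the same client binary the log invokes; a sketch, assuming socket_vmnet_client simply execs its trailing command with the connected socket on fd 3, as the qemu invocations above suggest:

    # Should fail immediately with "Connection refused" while the daemon is down
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true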

TestSkaffold (12.4s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe1335623040 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe1335623040 version: (1.058567666s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-856000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-856000 --memory=2600 --driver=qemu2 : exit status 80 (9.814992625s)

-- stdout --
	* [skaffold-856000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-856000" primary control-plane node in "skaffold-856000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-856000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-856000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-856000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-856000" primary control-plane node in "skaffold-856000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-856000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-856000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-08-12 03:32:01.596075 -0700 PDT m=+765.789315668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-856000 -n skaffold-856000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-856000 -n skaffold-856000: exit status 7 (60.210375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-856000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-856000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-856000
--- FAIL: TestSkaffold (12.40s)
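Three consecutive tests (TestPreload, TestScheduledStopUnix, TestSkaffold) have now failed with the identical environmental error, which points at one host problem rather than three separate regressions. When triaging a report like this, counting occurrences of the error string helps separate environment failures from genuine ones; a one-line sketch, assuming the report text has been saved locally as report.txt (a hypothetical filename):

    grep -c 'Failed to connect to "/var/run/socket_vmnet"' report.txt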

TestRunningBinaryUpgrade (588s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3368971709 start -p running-upgrade-969000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3368971709 start -p running-upgrade-969000 --memory=2200 --vm-driver=qemu2 : (51.447364208s)
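Unlike the fresh profiles above, the VM created by the old v1.26.0 binary boots successfully: its profile predates the socket_vmnet network, so in the cluster config dumped below, Network, SocketVMnetClientPath and SocketVMnetPath are all empty and the node sits at 10.0.2.15, QEMU's default user-mode-networking guest address. The upgrade step therefore reuses a running VM and never touches the broken daemon. A sketch for confirming this from the saved profile once the new binary has re-saved it (see the "Saving config" line below), assuming the config struct fields serialize into config.json under their Go names, as the dump suggests:

    grep -o '"SocketVMnetPath": *"[^"]*"' \
      /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/running-upgrade-969000/config.json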
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-969000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-969000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m23.348001417s)

-- stdout --
	* [running-upgrade-969000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-969000" primary control-plane node in "running-upgrade-969000" cluster
	* Updating the running qemu2 "running-upgrade-969000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0812 03:33:34.467922    8914 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:33:34.468068    8914 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:33:34.468071    8914 out.go:304] Setting ErrFile to fd 2...
	I0812 03:33:34.468073    8914 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:33:34.468202    8914 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:33:34.469119    8914 out.go:298] Setting JSON to false
	I0812 03:33:34.485468    8914 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5584,"bootTime":1723453230,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:33:34.485552    8914 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:33:34.488548    8914 out.go:177] * [running-upgrade-969000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:33:34.496688    8914 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:33:34.496770    8914 notify.go:220] Checking for updates...
	I0812 03:33:34.504668    8914 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:33:34.508551    8914 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:33:34.511614    8914 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:33:34.514784    8914 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:33:34.517572    8914 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:33:34.520867    8914 config.go:182] Loaded profile config "running-upgrade-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0812 03:33:34.523702    8914 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0812 03:33:34.526659    8914 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:33:34.530643    8914 out.go:177] * Using the qemu2 driver based on existing profile
	I0812 03:33:34.536625    8914 start.go:297] selected driver: qemu2
	I0812 03:33:34.536632    8914 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51257 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0812 03:33:34.536677    8914 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:33:34.539060    8914 cni.go:84] Creating CNI manager for ""
	I0812 03:33:34.539077    8914 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0812 03:33:34.539119    8914 start.go:340] cluster config:
	{Name:running-upgrade-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51257 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0812 03:33:34.539168    8914 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:33:34.546718    8914 out.go:177] * Starting "running-upgrade-969000" primary control-plane node in "running-upgrade-969000" cluster
	I0812 03:33:34.550667    8914 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0812 03:33:34.550685    8914 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0812 03:33:34.550700    8914 cache.go:56] Caching tarball of preloaded images
	I0812 03:33:34.550758    8914 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0812 03:33:34.550763    8914 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0812 03:33:34.550824    8914 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/running-upgrade-969000/config.json ...
	I0812 03:33:34.551278    8914 start.go:360] acquireMachinesLock for running-upgrade-969000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:33:34.551309    8914 start.go:364] duration metric: took 26.084µs to acquireMachinesLock for "running-upgrade-969000"
	I0812 03:33:34.551318    8914 start.go:96] Skipping create...Using existing machine configuration
	I0812 03:33:34.551323    8914 fix.go:54] fixHost starting: 
	I0812 03:33:34.551857    8914 fix.go:112] recreateIfNeeded on running-upgrade-969000: state=Running err=<nil>
	W0812 03:33:34.551864    8914 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 03:33:34.555729    8914 out.go:177] * Updating the running qemu2 "running-upgrade-969000" VM ...
	I0812 03:33:34.559593    8914 machine.go:94] provisionDockerMachine start ...
	I0812 03:33:34.559631    8914 main.go:141] libmachine: Using SSH client type: native
	I0812 03:33:34.559823    8914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d0aa10] 0x102d0d270 <nil>  [] 0s} localhost 51225 <nil> <nil>}
	I0812 03:33:34.559828    8914 main.go:141] libmachine: About to run SSH command:
	hostname
	I0812 03:33:34.612669    8914 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-969000
	
	I0812 03:33:34.612684    8914 buildroot.go:166] provisioning hostname "running-upgrade-969000"
	I0812 03:33:34.612738    8914 main.go:141] libmachine: Using SSH client type: native
	I0812 03:33:34.612851    8914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d0aa10] 0x102d0d270 <nil>  [] 0s} localhost 51225 <nil> <nil>}
	I0812 03:33:34.612856    8914 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-969000 && echo "running-upgrade-969000" | sudo tee /etc/hostname
	I0812 03:33:34.665594    8914 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-969000
	
	I0812 03:33:34.665652    8914 main.go:141] libmachine: Using SSH client type: native
	I0812 03:33:34.665770    8914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d0aa10] 0x102d0d270 <nil>  [] 0s} localhost 51225 <nil> <nil>}
	I0812 03:33:34.665778    8914 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-969000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-969000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-969000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 03:33:34.716111    8914 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 03:33:34.716123    8914 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19409-6342/.minikube CaCertPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19409-6342/.minikube}
	I0812 03:33:34.716136    8914 buildroot.go:174] setting up certificates
	I0812 03:33:34.716154    8914 provision.go:84] configureAuth start
	I0812 03:33:34.716159    8914 provision.go:143] copyHostCerts
	I0812 03:33:34.716248    8914 exec_runner.go:144] found /Users/jenkins/minikube-integration/19409-6342/.minikube/ca.pem, removing ...
	I0812 03:33:34.716254    8914 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19409-6342/.minikube/ca.pem
	I0812 03:33:34.716380    8914 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19409-6342/.minikube/ca.pem (1082 bytes)
	I0812 03:33:34.716543    8914 exec_runner.go:144] found /Users/jenkins/minikube-integration/19409-6342/.minikube/cert.pem, removing ...
	I0812 03:33:34.716547    8914 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19409-6342/.minikube/cert.pem
	I0812 03:33:34.716602    8914 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19409-6342/.minikube/cert.pem (1123 bytes)
	I0812 03:33:34.716702    8914 exec_runner.go:144] found /Users/jenkins/minikube-integration/19409-6342/.minikube/key.pem, removing ...
	I0812 03:33:34.716705    8914 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19409-6342/.minikube/key.pem
	I0812 03:33:34.716750    8914 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19409-6342/.minikube/key.pem (1675 bytes)
	I0812 03:33:34.716830    8914 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-969000 san=[127.0.0.1 localhost minikube running-upgrade-969000]
	I0812 03:33:34.800097    8914 provision.go:177] copyRemoteCerts
	I0812 03:33:34.800131    8914 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 03:33:34.800137    8914 sshutil.go:53] new ssh client: &{IP:localhost Port:51225 SSHKeyPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/running-upgrade-969000/id_rsa Username:docker}
	I0812 03:33:34.828923    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0812 03:33:34.835534    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0812 03:33:34.843215    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0812 03:33:34.849749    8914 provision.go:87] duration metric: took 133.58525ms to configureAuth
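configureAuth above regenerates the Docker server certificate so its SANs cover 127.0.0.1, localhost, minikube and the machine name. A self-contained sketch of issuing such a SAN-bearing server cert from a CA pair; the ECDSA key type, lifetime, and helper name are assumptions of the sketch, not minikube's actual parameters:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "os"
        "time"
    )

    // issueServerCert signs a server certificate for the given DNS names and
    // IPs with the supplied CA -- the shape of cert that ends up as server.pem.
    func issueServerCert(ca *x509.Certificate, caKey *ecdsa.PrivateKey, dns []string, ips []net.IP) ([]byte, []byte, error) {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.running-upgrade-969000"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // lifetime is an assumption
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     dns,
            IPAddresses:  ips,
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        keyDER, err := x509.MarshalECPrivateKey(key)
        if err != nil {
            return nil, nil, err
        }
        return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}),
            pem.EncodeToMemory(&pem.Block{Type: "EC PRIVATE KEY", Bytes: keyDER}), nil
    }

    func main() {
        // Throwaway self-signed CA so the sketch runs standalone; the real
        // run loads ca.pem/ca-key.pem from the .minikube/certs directory.
        caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now().Add(-time.Hour),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        ca, _ := x509.ParseCertificate(caDER)

        certPEM, keyPEM, err := issueServerCert(ca, caKey,
            []string{"localhost", "minikube", "running-upgrade-969000"},
            []net.IP{net.ParseIP("127.0.0.1")})
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        os.Stdout.Write(certPEM)
        os.Stdout.Write(keyPEM)
    }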
	I0812 03:33:34.849758    8914 buildroot.go:189] setting minikube options for container-runtime
	I0812 03:33:34.849879    8914 config.go:182] Loaded profile config "running-upgrade-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0812 03:33:34.849917    8914 main.go:141] libmachine: Using SSH client type: native
	I0812 03:33:34.850011    8914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d0aa10] 0x102d0d270 <nil>  [] 0s} localhost 51225 <nil> <nil>}
	I0812 03:33:34.850015    8914 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0812 03:33:34.900388    8914 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0812 03:33:34.900399    8914 buildroot.go:70] root file system type: tmpfs
	I0812 03:33:34.900463    8914 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0812 03:33:34.900509    8914 main.go:141] libmachine: Using SSH client type: native
	I0812 03:33:34.900626    8914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d0aa10] 0x102d0d270 <nil>  [] 0s} localhost 51225 <nil> <nil>}
	I0812 03:33:34.900659    8914 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0812 03:33:34.956260    8914 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0812 03:33:34.956316    8914 main.go:141] libmachine: Using SSH client type: native
	I0812 03:33:34.956429    8914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d0aa10] 0x102d0d270 <nil>  [] 0s} localhost 51225 <nil> <nil>}
	I0812 03:33:34.956437    8914 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0812 03:33:35.008744    8914 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 03:33:35.008753    8914 machine.go:97] duration metric: took 449.14ms to provisionDockerMachine
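The update just performed follows a diff-then-swap idiom: the new unit is written to docker.service.new, diffed against the live unit, and only moved into place (followed by daemon-reload/enable/restart) when the content differs, which keeps re-provisioning a running machine cheap. A sketch of that idiom, assuming local root access rather than the SSH session the log shows:

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // updateUnitIfChanged mirrors the diff-then-swap step from the log: the
    // new unit is only moved into place, and docker only restarted, when its
    // content actually differs from the live unit.
    func updateUnitIfChanged(path string, newUnit []byte) error {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, newUnit) {
            return nil // unchanged; avoids a needless docker restart
        }
        if err := os.WriteFile(path+".new", newUnit, 0644); err != nil {
            return err
        }
        if err := os.Rename(path+".new", path); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"daemon-reload"},
            {"enable", "docker"},
            {"restart", "docker"},
        } {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
        if err := updateUnitIfChanged("/lib/systemd/system/docker.service", unit); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }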
	I0812 03:33:35.008762    8914 start.go:293] postStartSetup for "running-upgrade-969000" (driver="qemu2")
	I0812 03:33:35.008768    8914 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 03:33:35.008809    8914 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 03:33:35.008817    8914 sshutil.go:53] new ssh client: &{IP:localhost Port:51225 SSHKeyPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/running-upgrade-969000/id_rsa Username:docker}
	I0812 03:33:35.038125    8914 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 03:33:35.039477    8914 info.go:137] Remote host: Buildroot 2021.02.12
	I0812 03:33:35.039485    8914 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19409-6342/.minikube/addons for local assets ...
	I0812 03:33:35.039561    8914 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19409-6342/.minikube/files for local assets ...
	I0812 03:33:35.039675    8914 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19409-6342/.minikube/files/etc/ssl/certs/68412.pem -> 68412.pem in /etc/ssl/certs
	I0812 03:33:35.039798    8914 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 03:33:35.042865    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/files/etc/ssl/certs/68412.pem --> /etc/ssl/certs/68412.pem (1708 bytes)
	I0812 03:33:35.049724    8914 start.go:296] duration metric: took 40.955875ms for postStartSetup
	I0812 03:33:35.049737    8914 fix.go:56] duration metric: took 498.39925ms for fixHost
	I0812 03:33:35.049769    8914 main.go:141] libmachine: Using SSH client type: native
	I0812 03:33:35.049869    8914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d0aa10] 0x102d0d270 <nil>  [] 0s} localhost 51225 <nil> <nil>}
	I0812 03:33:35.049876    8914 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0812 03:33:35.100106    8914 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723458815.133664971
	
	I0812 03:33:35.100117    8914 fix.go:216] guest clock: 1723458815.133664971
	I0812 03:33:35.100121    8914 fix.go:229] Guest: 2024-08-12 03:33:35.133664971 -0700 PDT Remote: 2024-08-12 03:33:35.049739 -0700 PDT m=+0.600766043 (delta=83.925971ms)
	I0812 03:33:35.100138    8914 fix.go:200] guest clock delta is within tolerance: 83.925971ms
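The clock check samples the guest with `date +%s.%N` and compares against the host; the 83.9ms delta above is accepted. A sketch of parsing that output and applying a tolerance (the threshold below is an assumed value, not minikube's):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns `date +%s.%N` output such as "1723458815.133664971"
    // into a time.Time. Assumes the 9-digit fractional part that %N prints.
    func parseGuestClock(s string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, _ := parseGuestClock("1723458815.133664971")
        delta := guest.Sub(time.Now()) // in the log: +83.925971ms vs. the host
        const tolerance = time.Second  // assumed threshold
        if delta < -tolerance || delta > tolerance {
            fmt.Println("guest clock drift too large, would resync:", delta)
        } else {
            fmt.Println("guest clock delta within tolerance:", delta)
        }
    }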
	I0812 03:33:35.100141    8914 start.go:83] releasing machines lock for "running-upgrade-969000", held for 548.809542ms
	I0812 03:33:35.100198    8914 ssh_runner.go:195] Run: cat /version.json
	I0812 03:33:35.100208    8914 sshutil.go:53] new ssh client: &{IP:localhost Port:51225 SSHKeyPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/running-upgrade-969000/id_rsa Username:docker}
	I0812 03:33:35.100198    8914 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 03:33:35.100244    8914 sshutil.go:53] new ssh client: &{IP:localhost Port:51225 SSHKeyPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/running-upgrade-969000/id_rsa Username:docker}
	W0812 03:33:35.100757    8914 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:51336->127.0.0.1:51225: write: broken pipe
	I0812 03:33:35.100775    8914 retry.go:31] will retry after 258.287738ms: ssh: handshake failed: write tcp 127.0.0.1:51336->127.0.0.1:51225: write: broken pipe
	W0812 03:33:35.125155    8914 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0812 03:33:35.125198    8914 ssh_runner.go:195] Run: systemctl --version
	I0812 03:33:35.127059    8914 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 03:33:35.128731    8914 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 03:33:35.128756    8914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0812 03:33:35.131675    8914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0812 03:33:35.136277    8914 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0812 03:33:35.136284    8914 start.go:495] detecting cgroup driver to use...
	I0812 03:33:35.136400    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 03:33:35.141595    8914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0812 03:33:35.144524    8914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0812 03:33:35.147942    8914 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0812 03:33:35.147967    8914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0812 03:33:35.151445    8914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0812 03:33:35.155383    8914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0812 03:33:35.158377    8914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0812 03:33:35.161864    8914 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 03:33:35.165439    8914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0812 03:33:35.168413    8914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0812 03:33:35.171706    8914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
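Each of the config.toml adjustments above is a one-regex rewrite applied in place, the moral equivalent of `sed -i -r`. A sketch of that pattern, using the SystemdCgroup rewrite from the log as the example:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // sedInPlace applies a single regexp rewrite to a file, mirroring the
    // `sudo sed -i -r 's|...|...|'` calls run against
    // /etc/containerd/config.toml in the log. Sketch only.
    func sedInPlace(path, pattern, repl string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re, err := regexp.Compile(pattern)
        if err != nil {
            return err
        }
        return os.WriteFile(path, re.ReplaceAll(data, []byte(repl)), 0644)
    }

    func main() {
        // Force the cgroupfs driver, as in the log's SystemdCgroup rewrite.
        err := sedInPlace("/etc/containerd/config.toml",
            `(?m)^( *)SystemdCgroup = .*$`, "${1}SystemdCgroup = false")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }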
	I0812 03:33:35.175207    8914 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 03:33:35.177674    8914 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 03:33:35.180245    8914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 03:33:35.264263    8914 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0812 03:33:35.275742    8914 start.go:495] detecting cgroup driver to use...
	I0812 03:33:35.275822    8914 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0812 03:33:35.281021    8914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 03:33:35.286274    8914 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 03:33:35.292408    8914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 03:33:35.297632    8914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0812 03:33:35.302082    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 03:33:35.307359    8914 ssh_runner.go:195] Run: which cri-dockerd
	I0812 03:33:35.308627    8914 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0812 03:33:35.311868    8914 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0812 03:33:35.316992    8914 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0812 03:33:35.406159    8914 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0812 03:33:35.499909    8914 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0812 03:33:35.499973    8914 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0812 03:33:35.506538    8914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 03:33:35.589696    8914 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0812 03:33:38.649272    8914 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.059479417s)
	I0812 03:33:38.649336    8914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0812 03:33:38.653714    8914 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0812 03:33:38.659749    8914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0812 03:33:38.664432    8914 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0812 03:33:38.757098    8914 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0812 03:33:38.812566    8914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 03:33:38.882886    8914 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0812 03:33:38.888410    8914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0812 03:33:38.893492    8914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 03:33:38.977572    8914 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0812 03:33:39.018415    8914 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0812 03:33:39.018489    8914 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0812 03:33:39.021987    8914 start.go:563] Will wait 60s for crictl version
	I0812 03:33:39.022039    8914 ssh_runner.go:195] Run: which crictl
	I0812 03:33:39.023173    8914 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 03:33:39.034660    8914 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
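Both 60-second waits above are plain poll loops: stat the cri-dockerd socket (or run `crictl version`) until it succeeds or the deadline passes. A sketch, with an assumed poll interval:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until the path exists or the deadline passes,
    // mirroring the "Will wait 60s for socket path" step in the log.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(500 * time.Millisecond) // poll interval is an assumption
        }
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }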
	I0812 03:33:39.034720    8914 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0812 03:33:39.046962    8914 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0812 03:33:39.066305    8914 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0812 03:33:39.066365    8914 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0812 03:33:39.067842    8914 kubeadm.go:883] updating cluster {Name:running-upgrade-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51257 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0812 03:33:39.067885    8914 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0812 03:33:39.067921    8914 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0812 03:33:39.078641    8914 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0812 03:33:39.078650    8914 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0812 03:33:39.078700    8914 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0812 03:33:39.081846    8914 ssh_runner.go:195] Run: which lz4
	I0812 03:33:39.083175    8914 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0812 03:33:39.084510    8914 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0812 03:33:39.084522    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0812 03:33:40.045806    8914 docker.go:649] duration metric: took 962.638708ms to copy over tarball
	I0812 03:33:40.045861    8914 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0812 03:33:41.171173    8914 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.125278834s)
	I0812 03:33:41.171186    8914 ssh_runner.go:146] rm: /preloaded.tar.lz4
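The preload path above is: check whether /preloaded.tar.lz4 already exists on the guest, copy it over if not, unpack it into /var with lz4-compressed tar, then delete the tarball. A sketch of the extract-and-clean-up step (the copy is elided; the tar flags mirror the logged invocation):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // extractPreload unpacks the preloaded image tarball into dest and removes
    // it afterwards. Removal may need elevated rights; the real runner handles
    // that over its SSH session.
    func extractPreload(tarball, dest string) error {
        if _, err := os.Stat(tarball); err != nil {
            return fmt.Errorf("preload missing (would scp it over first): %w", err)
        }
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", dest, "-xf", tarball)
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("tar: %v: %s", err, out)
        }
        return os.Remove(tarball)
    }

    func main() {
        if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }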
	I0812 03:33:41.187000    8914 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0812 03:33:41.190023    8914 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0812 03:33:41.195429    8914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 03:33:41.275440    8914 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0812 03:33:42.646997    8914 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.371518792s)
	I0812 03:33:42.647095    8914 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0812 03:33:42.661100    8914 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0812 03:33:42.661108    8914 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0812 03:33:42.661113    8914 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0812 03:33:42.666047    8914 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 03:33:42.668054    8914 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0812 03:33:42.670161    8914 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 03:33:42.670199    8914 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0812 03:33:42.672650    8914 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0812 03:33:42.672715    8914 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0812 03:33:42.673804    8914 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0812 03:33:42.674798    8914 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0812 03:33:42.675830    8914 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0812 03:33:42.676069    8914 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0812 03:33:42.677190    8914 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0812 03:33:42.677259    8914 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0812 03:33:42.677988    8914 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0812 03:33:42.678263    8914 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0812 03:33:42.679142    8914 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0812 03:33:42.679781    8914 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0812 03:33:43.090465    8914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0812 03:33:43.104477    8914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0812 03:33:43.114842    8914 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0812 03:33:43.114867    8914 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0812 03:33:43.114924    8914 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0812 03:33:43.117769    8914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0812 03:33:43.125548    8914 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0812 03:33:43.125571    8914 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0812 03:33:43.125623    8914 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0812 03:33:43.130097    8914 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0812 03:33:43.134442    8914 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0812 03:33:43.134465    8914 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0812 03:33:43.134524    8914 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0812 03:33:43.136933    8914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0812 03:33:43.140685    8914 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0812 03:33:43.143835    8914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0812 03:33:43.146350    8914 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0812 03:33:43.155339    8914 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0812 03:33:43.155357    8914 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0812 03:33:43.155404    8914 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0812 03:33:43.165136    8914 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0812 03:33:43.165156    8914 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0812 03:33:43.165211    8914 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0812 03:33:43.166948    8914 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0812 03:33:43.167057    8914 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0812 03:33:43.181997    8914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0812 03:33:43.182024    8914 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0812 03:33:43.182067    8914 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0812 03:33:43.182080    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0812 03:33:43.182112    8914 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0812 03:33:43.191487    8914 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0812 03:33:43.191500    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0812 03:33:43.200785    8914 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0812 03:33:43.200795    8914 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0812 03:33:43.200809    8914 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0812 03:33:43.200821    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0812 03:33:43.200854    8914 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	W0812 03:33:43.205638    8914 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0812 03:33:43.205779    8914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0812 03:33:43.238559    8914 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0812 03:33:43.238597    8914 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0812 03:33:43.248975    8914 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0812 03:33:43.249003    8914 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0812 03:33:43.249055    8914 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0812 03:33:43.274816    8914 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0812 03:33:43.274937    8914 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0812 03:33:43.288120    8914 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0812 03:33:43.288149    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0812 03:33:43.384676    8914 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0812 03:33:43.384692    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0812 03:33:43.416242    8914 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0812 03:33:43.416353    8914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 03:33:43.519659    8914 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0812 03:33:43.519697    8914 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0812 03:33:43.519723    8914 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 03:33:43.519796    8914 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 03:33:43.595857    8914 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0812 03:33:43.595874    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0812 03:33:43.608359    8914 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0812 03:33:43.608492    8914 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0812 03:33:43.727561    8914 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0812 03:33:43.727598    8914 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0812 03:33:43.727626    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0812 03:33:43.765663    8914 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0812 03:33:43.765680    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0812 03:33:44.042473    8914 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0812 03:33:44.042516    8914 cache_images.go:92] duration metric: took 1.381377792s to LoadCachedImages
	W0812 03:33:44.042557    8914 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
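Each cached image above goes through the same flow: compare the image ID Docker reports against the expected hash, and on mismatch remove the stale tag, copy the cached tarball into /var/lib/minikube/images, and stream it into `docker load`. A sketch of that per-image flow (the sha256: prefix handling is an assumption; the log prints bare hashes):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // loadCachedImage checks the image ID docker reports for ref and, when it
    // does not match wantID, removes the stale tag and streams the cached
    // tarball into `docker load`, as the log does per image.
    func loadCachedImage(ref, wantID, tarball string) error {
        out, _ := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", ref).Output()
        if string(out) == wantID+"\n" {
            return nil // already present at the right hash
        }
        exec.Command("docker", "rmi", ref).Run() // ignore "no such image"
        f, err := os.Open(tarball)
        if err != nil {
            return err
        }
        defer f.Close()
        load := exec.Command("docker", "load")
        load.Stdin = f
        if out, err := load.CombinedOutput(); err != nil {
            return fmt.Errorf("docker load: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        err := loadCachedImage("registry.k8s.io/pause:3.7",
            "sha256:e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550",
            "/var/lib/minikube/images/pause_3.7")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }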
	I0812 03:33:44.042564    8914 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0812 03:33:44.042622    8914 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-969000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0812 03:33:44.042683    8914 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0812 03:33:44.070151    8914 cni.go:84] Creating CNI manager for ""
	I0812 03:33:44.070160    8914 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0812 03:33:44.070167    8914 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0812 03:33:44.070189    8914 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-969000 NodeName:running-upgrade-969000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0812 03:33:44.070260    8914 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-969000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0812 03:33:44.070323    8914 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0812 03:33:44.073579    8914 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 03:33:44.073604    8914 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0812 03:33:44.076379    8914 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0812 03:33:44.086526    8914 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 03:33:44.091493    8914 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0812 03:33:44.096537    8914 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0812 03:33:44.098143    8914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 03:33:44.171744    8914 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 03:33:44.176504    8914 certs.go:68] Setting up /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/running-upgrade-969000 for IP: 10.0.2.15
	I0812 03:33:44.176525    8914 certs.go:194] generating shared ca certs ...
	I0812 03:33:44.176535    8914 certs.go:226] acquiring lock for ca certs: {Name:mk040c6fb5b98a0bc56f55d23979ed8d77242cff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:33:44.176774    8914 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19409-6342/.minikube/ca.key
	I0812 03:33:44.176825    8914 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19409-6342/.minikube/proxy-client-ca.key
	I0812 03:33:44.176830    8914 certs.go:256] generating profile certs ...
	I0812 03:33:44.176885    8914 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/running-upgrade-969000/client.key
	I0812 03:33:44.176898    8914 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/running-upgrade-969000/apiserver.key.60519158
	I0812 03:33:44.176909    8914 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/running-upgrade-969000/apiserver.crt.60519158 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0812 03:33:44.392224    8914 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/running-upgrade-969000/apiserver.crt.60519158 ...
	I0812 03:33:44.392236    8914 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/running-upgrade-969000/apiserver.crt.60519158: {Name:mkadbcfe2d5f28348899438a8e1b63b1c519288c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:33:44.392571    8914 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/running-upgrade-969000/apiserver.key.60519158 ...
	I0812 03:33:44.392576    8914 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/running-upgrade-969000/apiserver.key.60519158: {Name:mkcd310efbecb5793240d662db2e98fd5819201a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:33:44.392706    8914 certs.go:381] copying /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/running-upgrade-969000/apiserver.crt.60519158 -> /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/running-upgrade-969000/apiserver.crt
	I0812 03:33:44.392898    8914 certs.go:385] copying /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/running-upgrade-969000/apiserver.key.60519158 -> /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/running-upgrade-969000/apiserver.key
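The apiserver certificate above is issued for [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]; 10.96.0.1 is the first host address of the service CIDR 10.96.0.0/12, i.e. the in-cluster `kubernetes` Service IP, which is why it must appear as a SAN. A sketch of deriving it:

    package main

    import (
        "fmt"
        "net"
    )

    // firstServiceIP returns the first usable host address of a service CIDR,
    // which kubeadm assigns to the in-cluster "kubernetes" Service; for
    // 10.96.0.0/12 that is 10.96.0.1, matching the cert SAN in the log.
    func firstServiceIP(cidr string) (net.IP, error) {
        _, ipnet, err := net.ParseCIDR(cidr)
        if err != nil {
            return nil, err
        }
        ip := ipnet.IP.To4()
        if ip == nil {
            return nil, fmt.Errorf("sketch handles IPv4 only")
        }
        out := make(net.IP, len(ip))
        copy(out, ip)
        out[3]++ // network address + 1
        return out, nil
    }

    func main() {
        ip, _ := firstServiceIP("10.96.0.0/12")
        fmt.Println(ip) // 10.96.0.1
    }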
	I0812 03:33:44.393090    8914 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/running-upgrade-969000/proxy-client.key
	I0812 03:33:44.393229    8914 certs.go:484] found cert: /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/6841.pem (1338 bytes)
	W0812 03:33:44.393258    8914 certs.go:480] ignoring /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/6841_empty.pem, impossibly tiny 0 bytes
	I0812 03:33:44.393265    8914 certs.go:484] found cert: /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca-key.pem (1679 bytes)
	I0812 03:33:44.393287    8914 certs.go:484] found cert: /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem (1082 bytes)
	I0812 03:33:44.393308    8914 certs.go:484] found cert: /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem (1123 bytes)
	I0812 03:33:44.393326    8914 certs.go:484] found cert: /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/key.pem (1675 bytes)
	I0812 03:33:44.393364    8914 certs.go:484] found cert: /Users/jenkins/minikube-integration/19409-6342/.minikube/files/etc/ssl/certs/68412.pem (1708 bytes)
	I0812 03:33:44.393711    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 03:33:44.401307    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 03:33:44.408525    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 03:33:44.415881    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 03:33:44.423329    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/running-upgrade-969000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0812 03:33:44.430302    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/running-upgrade-969000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0812 03:33:44.437229    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/running-upgrade-969000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 03:33:44.444142    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/running-upgrade-969000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0812 03:33:44.451785    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/files/etc/ssl/certs/68412.pem --> /usr/share/ca-certificates/68412.pem (1708 bytes)
	I0812 03:33:44.458741    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 03:33:44.465465    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/6841.pem --> /usr/share/ca-certificates/6841.pem (1338 bytes)
	I0812 03:33:44.472557    8914 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 03:33:44.477263    8914 ssh_runner.go:195] Run: openssl version
	I0812 03:33:44.479138    8914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/68412.pem && ln -fs /usr/share/ca-certificates/68412.pem /etc/ssl/certs/68412.pem"
	I0812 03:33:44.482156    8914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/68412.pem
	I0812 03:33:44.483598    8914 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 10:20 /usr/share/ca-certificates/68412.pem
	I0812 03:33:44.483616    8914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/68412.pem
	I0812 03:33:44.485504    8914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/68412.pem /etc/ssl/certs/3ec20f2e.0"
	I0812 03:33:44.488477    8914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 03:33:44.491900    8914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 03:33:44.493348    8914 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 10:33 /usr/share/ca-certificates/minikubeCA.pem
	I0812 03:33:44.493365    8914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 03:33:44.495232    8914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 03:33:44.498061    8914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6841.pem && ln -fs /usr/share/ca-certificates/6841.pem /etc/ssl/certs/6841.pem"
	I0812 03:33:44.501019    8914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6841.pem
	I0812 03:33:44.502557    8914 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 10:20 /usr/share/ca-certificates/6841.pem
	I0812 03:33:44.502583    8914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6841.pem
	I0812 03:33:44.504345    8914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6841.pem /etc/ssl/certs/51391683.0"
	I0812 03:33:44.507740    8914 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 03:33:44.509272    8914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0812 03:33:44.511179    8914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0812 03:33:44.513119    8914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0812 03:33:44.515121    8914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0812 03:33:44.517235    8914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0812 03:33:44.519107    8914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
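The six openssl invocations above are `-checkend 86400` probes: each asks whether a control-plane certificate expires within the next 24 hours, which decides whether certs must be regenerated before restarting the cluster. An equivalent check in Go:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // checkEnd mirrors `openssl x509 -checkend 86400`: it reports whether the
    // first certificate in a PEM file expires within the given window.
    func checkEnd(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        expiring, err := checkEnd("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", expiring)
    }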
	I0812 03:33:44.521063    8914 kubeadm.go:392] StartCluster: {Name:running-upgrade-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51257 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0812 03:33:44.521473    8914 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0812 03:33:44.535850    8914 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0812 03:33:44.539557    8914 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0812 03:33:44.539564    8914 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0812 03:33:44.539590    8914 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0812 03:33:44.542451    8914 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0812 03:33:44.542484    8914 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-969000" does not appear in /Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:33:44.542498    8914 kubeconfig.go:62] /Users/jenkins/minikube-integration/19409-6342/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-969000" cluster setting kubeconfig missing "running-upgrade-969000" context setting]
	I0812 03:33:44.542667    8914 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/kubeconfig: {Name:mkb70885d9201a61b449567803d8de7b739c5101 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:33:44.548445    8914 kapi.go:59] client config for running-upgrade-969000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/running-upgrade-969000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/running-upgrade-969000/client.key", CAFile:"/Users/jenkins/minikube-integration/19409-6342/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1040a04e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0812 03:33:44.549251    8914 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0812 03:33:44.552642    8914 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-969000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
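
The drift check is a plain diff -u between the kubeadm config already on disk and the freshly rendered one; any difference (here criSocket gaining its unix:// scheme, and cgroupDriver flipping from systemd to cgroupfs alongside the added hairpinMode and runtimeRequestTimeout) forces a reconfigure. Roughly, with the paths from the log:

    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
        echo "kubeadm config drift detected; reconfiguring from kubeadm.yaml.new"
    fi
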
	I0812 03:33:44.552648    8914 kubeadm.go:1160] stopping kube-system containers ...
	I0812 03:33:44.552687    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0812 03:33:44.564188    8914 docker.go:483] Stopping containers: [497339253ef3 9c6cf45734d9 70d652942af1 aa5c5b8f7f1e 2c7290b9875e 533ac025a3aa efb0ca88de2d 228405b1a307 dbf0e437f9ea a84eef3c085a 6e79a52effc1 c40c5b140eca 2b7bb5a0a52f c2fdfc01743b]
	I0812 03:33:44.564255    8914 ssh_runner.go:195] Run: docker stop 497339253ef3 9c6cf45734d9 70d652942af1 aa5c5b8f7f1e 2c7290b9875e 533ac025a3aa efb0ca88de2d 228405b1a307 dbf0e437f9ea a84eef3c085a 6e79a52effc1 c40c5b140eca 2b7bb5a0a52f c2fdfc01743b
	I0812 03:33:44.622905    8914 ssh_runner.go:195] Run: sudo systemctl stop kubelet
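
The teardown order above is containers first, service second: docker ps with a name filter enumerates the kubelet-created containers (named k8s_<container>_<pod>_<namespace>_...), docker stop takes the resulting IDs, and only then is the kubelet unit stopped. A hedged one-liner using the same filter as the log:

    docker stop $(docker ps -a --filter=name='k8s_.*_(kube-system)_' --format '{{.ID}}')
    sudo systemctl stop kubelet
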
	I0812 03:33:44.717921    8914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 03:33:44.722169    8914 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Aug 12 10:33 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Aug 12 10:33 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Aug 12 10:33 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Aug 12 10:33 /etc/kubernetes/scheduler.conf
	
	I0812 03:33:44.722207    8914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51257 /etc/kubernetes/admin.conf
	I0812 03:33:44.725813    8914 kubeadm.go:163] "https://control-plane.minikube.internal:51257" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51257 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0812 03:33:44.725844    8914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 03:33:44.729348    8914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51257 /etc/kubernetes/kubelet.conf
	I0812 03:33:44.732636    8914 kubeadm.go:163] "https://control-plane.minikube.internal:51257" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51257 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0812 03:33:44.732666    8914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 03:33:44.736295    8914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51257 /etc/kubernetes/controller-manager.conf
	I0812 03:33:44.739563    8914 kubeadm.go:163] "https://control-plane.minikube.internal:51257" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51257 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0812 03:33:44.739586    8914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 03:33:44.742545    8914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51257 /etc/kubernetes/scheduler.conf
	I0812 03:33:44.745115    8914 kubeadm.go:163] "https://control-plane.minikube.internal:51257" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51257 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0812 03:33:44.745138    8914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
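
Each of the four grep/rm pairs above asks whether the existing kubeconfig file references the expected control-plane endpoint (https://control-plane.minikube.internal:51257); when grep exits 1, the file is deleted so the kubeadm phases below can regenerate it. An equivalent loop, endpoint and paths taken from the log:

    for f in admin kubelet controller-manager scheduler; do
        sudo grep -q 'https://control-plane.minikube.internal:51257' /etc/kubernetes/$f.conf \
            || sudo rm -f /etc/kubernetes/$f.conf
    done
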
	I0812 03:33:44.748128    8914 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 03:33:44.751271    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 03:33:44.772388    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 03:33:45.345686    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0812 03:33:45.537083    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 03:33:45.558589    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
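
Because existing configuration was found, the restart path re-runs individual kubeadm init phases rather than a full kubeadm init: certs, kubeconfig, kubelet-start, control-plane, and local etcd, all against the same /var/tmp/minikube/kubeadm.yaml and the version-pinned binaries. Condensed from the five Run lines above:

    B=/var/lib/minikube/binaries/v1.24.1
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
        sudo env PATH="$B:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done
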
	I0812 03:33:45.581448    8914 api_server.go:52] waiting for apiserver process to appear ...
	I0812 03:33:45.581539    8914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 03:33:46.083696    8914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 03:33:46.583587    8914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 03:33:46.587889    8914 api_server.go:72] duration metric: took 1.006434834s to wait for apiserver process to appear ...
	I0812 03:33:46.587898    8914 api_server.go:88] waiting for apiserver healthz status ...
	I0812 03:33:46.587906    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:33:51.590057    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:33:51.590105    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:33:56.590465    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:33:56.590562    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:34:01.591391    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:34:01.591434    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:34:06.592339    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:34:06.592402    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:34:11.593640    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:34:11.593691    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:34:16.594617    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:34:16.594725    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:34:21.596563    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:34:21.596612    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:34:26.598702    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:34:26.598763    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:34:31.601257    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:34:31.601342    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:34:36.602268    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:34:36.602340    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:34:41.604884    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:34:41.604930    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:34:46.607093    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
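
From here the test settles into a poll loop: each healthz probe is an HTTPS GET against https://10.0.2.15:8443/healthz with what appears from the timestamp gaps and the "Client.Timeout exceeded" errors to be roughly a 5-second client timeout, and each batch of failed probes triggers a diagnostic sweep of component logs. An equivalent manual probe, timeout inferred from the log:

    curl -k --max-time 5 https://10.0.2.15:8443/healthz \
        || echo "apiserver still not answering healthz"
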
	I0812 03:34:46.607392    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:34:46.634546    8914 logs.go:276] 2 containers: [014c98333383 a84eef3c085a]
	I0812 03:34:46.634696    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:34:46.652051    8914 logs.go:276] 2 containers: [19daa5a836e8 dbf0e437f9ea]
	I0812 03:34:46.652136    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:34:46.665942    8914 logs.go:276] 1 containers: [42fb9cb7a732]
	I0812 03:34:46.666018    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:34:46.677208    8914 logs.go:276] 2 containers: [12414f6e5bb9 497339253ef3]
	I0812 03:34:46.677282    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:34:46.691073    8914 logs.go:276] 1 containers: [b3cf63e263fe]
	I0812 03:34:46.691145    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:34:46.701576    8914 logs.go:276] 2 containers: [aaa8bcdd506c 533ac025a3aa]
	I0812 03:34:46.701640    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:34:46.716332    8914 logs.go:276] 0 containers: []
	W0812 03:34:46.716345    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:34:46.716403    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:34:46.726551    8914 logs.go:276] 0 containers: []
	W0812 03:34:46.726561    8914 logs.go:278] No container was found matching "storage-provisioner"
	I0812 03:34:46.726569    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:34:46.726574    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:34:46.731299    8914 logs.go:123] Gathering logs for kube-apiserver [a84eef3c085a] ...
	I0812 03:34:46.731309    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84eef3c085a"
	I0812 03:34:46.757271    8914 logs.go:123] Gathering logs for etcd [dbf0e437f9ea] ...
	I0812 03:34:46.757282    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf0e437f9ea"
	I0812 03:34:46.772315    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:34:46.772325    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:34:46.796645    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:34:46.796654    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:34:46.833067    8914 logs.go:123] Gathering logs for etcd [19daa5a836e8] ...
	I0812 03:34:46.833077    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19daa5a836e8"
	I0812 03:34:46.846786    8914 logs.go:123] Gathering logs for kube-scheduler [12414f6e5bb9] ...
	I0812 03:34:46.846799    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12414f6e5bb9"
	I0812 03:34:46.858207    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:34:46.858219    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:34:46.928693    8914 logs.go:123] Gathering logs for coredns [42fb9cb7a732] ...
	I0812 03:34:46.928718    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42fb9cb7a732"
	I0812 03:34:46.940593    8914 logs.go:123] Gathering logs for kube-proxy [b3cf63e263fe] ...
	I0812 03:34:46.940605    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3cf63e263fe"
	I0812 03:34:46.956362    8914 logs.go:123] Gathering logs for kube-apiserver [014c98333383] ...
	I0812 03:34:46.956375    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c98333383"
	I0812 03:34:46.970186    8914 logs.go:123] Gathering logs for kube-scheduler [497339253ef3] ...
	I0812 03:34:46.970198    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497339253ef3"
	I0812 03:34:46.983979    8914 logs.go:123] Gathering logs for kube-controller-manager [aaa8bcdd506c] ...
	I0812 03:34:46.983993    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa8bcdd506c"
	I0812 03:34:47.000805    8914 logs.go:123] Gathering logs for kube-controller-manager [533ac025a3aa] ...
	I0812 03:34:47.000816    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533ac025a3aa"
	I0812 03:34:47.013644    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:34:47.013656    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
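
The diagnostic sweep itself is mechanical: one docker ps -a per control-plane component to collect container IDs (both current and exited instances, hence two IDs for apiserver, etcd, scheduler, and controller-manager), then docker logs --tail 400 on each ID, plus journalctl for kubelet and Docker, dmesg, kubectl describe nodes, and a container-status listing. A sketch of one component's pass:

    for id in $(docker ps -a --filter=name=k8s_kube-apiserver --format '{{.ID}}'); do
        docker logs --tail 400 "$id"
    done
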
	I0812 03:34:49.527661    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:34:54.530073    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:34:54.530481    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:34:54.570244    8914 logs.go:276] 2 containers: [014c98333383 a84eef3c085a]
	I0812 03:34:54.570372    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:34:54.592530    8914 logs.go:276] 2 containers: [19daa5a836e8 dbf0e437f9ea]
	I0812 03:34:54.592628    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:34:54.613688    8914 logs.go:276] 1 containers: [42fb9cb7a732]
	I0812 03:34:54.613765    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:34:54.626111    8914 logs.go:276] 2 containers: [12414f6e5bb9 497339253ef3]
	I0812 03:34:54.626183    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:34:54.636679    8914 logs.go:276] 1 containers: [b3cf63e263fe]
	I0812 03:34:54.636747    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:34:54.647905    8914 logs.go:276] 2 containers: [aaa8bcdd506c 533ac025a3aa]
	I0812 03:34:54.647977    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:34:54.658234    8914 logs.go:276] 0 containers: []
	W0812 03:34:54.658246    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:34:54.658306    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:34:54.672768    8914 logs.go:276] 0 containers: []
	W0812 03:34:54.672781    8914 logs.go:278] No container was found matching "storage-provisioner"
	I0812 03:34:54.672787    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:34:54.672793    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:34:54.708270    8914 logs.go:123] Gathering logs for etcd [19daa5a836e8] ...
	I0812 03:34:54.708285    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19daa5a836e8"
	I0812 03:34:54.724214    8914 logs.go:123] Gathering logs for kube-controller-manager [aaa8bcdd506c] ...
	I0812 03:34:54.724225    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa8bcdd506c"
	I0812 03:34:54.743169    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:34:54.743182    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:34:54.769436    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:34:54.769446    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:34:54.806720    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:34:54.806729    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:34:54.811159    8914 logs.go:123] Gathering logs for etcd [dbf0e437f9ea] ...
	I0812 03:34:54.811166    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf0e437f9ea"
	I0812 03:34:54.826144    8914 logs.go:123] Gathering logs for coredns [42fb9cb7a732] ...
	I0812 03:34:54.826155    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42fb9cb7a732"
	I0812 03:34:54.838024    8914 logs.go:123] Gathering logs for kube-proxy [b3cf63e263fe] ...
	I0812 03:34:54.838038    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3cf63e263fe"
	I0812 03:34:54.849645    8914 logs.go:123] Gathering logs for kube-apiserver [a84eef3c085a] ...
	I0812 03:34:54.849655    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84eef3c085a"
	I0812 03:34:54.873854    8914 logs.go:123] Gathering logs for kube-scheduler [12414f6e5bb9] ...
	I0812 03:34:54.873863    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12414f6e5bb9"
	I0812 03:34:54.885077    8914 logs.go:123] Gathering logs for kube-scheduler [497339253ef3] ...
	I0812 03:34:54.885087    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497339253ef3"
	I0812 03:34:54.899139    8914 logs.go:123] Gathering logs for kube-apiserver [014c98333383] ...
	I0812 03:34:54.899153    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c98333383"
	I0812 03:34:54.913403    8914 logs.go:123] Gathering logs for kube-controller-manager [533ac025a3aa] ...
	I0812 03:34:54.913415    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533ac025a3aa"
	I0812 03:34:54.925092    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:34:54.925107    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:34:57.439106    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:35:02.441877    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:35:02.442261    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:35:02.481324    8914 logs.go:276] 2 containers: [014c98333383 a84eef3c085a]
	I0812 03:35:02.481450    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:35:02.500748    8914 logs.go:276] 2 containers: [19daa5a836e8 dbf0e437f9ea]
	I0812 03:35:02.500864    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:35:02.515430    8914 logs.go:276] 1 containers: [42fb9cb7a732]
	I0812 03:35:02.515510    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:35:02.527494    8914 logs.go:276] 2 containers: [12414f6e5bb9 497339253ef3]
	I0812 03:35:02.527572    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:35:02.543776    8914 logs.go:276] 1 containers: [b3cf63e263fe]
	I0812 03:35:02.543840    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:35:02.555020    8914 logs.go:276] 2 containers: [aaa8bcdd506c 533ac025a3aa]
	I0812 03:35:02.555080    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:35:02.565692    8914 logs.go:276] 0 containers: []
	W0812 03:35:02.565703    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:35:02.565764    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:35:02.580674    8914 logs.go:276] 0 containers: []
	W0812 03:35:02.580686    8914 logs.go:278] No container was found matching "storage-provisioner"
	I0812 03:35:02.580695    8914 logs.go:123] Gathering logs for kube-controller-manager [533ac025a3aa] ...
	I0812 03:35:02.580701    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533ac025a3aa"
	I0812 03:35:02.598969    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:35:02.598983    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:35:02.634996    8914 logs.go:123] Gathering logs for etcd [dbf0e437f9ea] ...
	I0812 03:35:02.635006    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf0e437f9ea"
	I0812 03:35:02.649729    8914 logs.go:123] Gathering logs for kube-controller-manager [aaa8bcdd506c] ...
	I0812 03:35:02.649742    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa8bcdd506c"
	I0812 03:35:02.666943    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:35:02.666956    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:35:02.678905    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:35:02.678916    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:35:02.717350    8914 logs.go:123] Gathering logs for kube-apiserver [a84eef3c085a] ...
	I0812 03:35:02.717362    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84eef3c085a"
	I0812 03:35:02.742088    8914 logs.go:123] Gathering logs for kube-scheduler [12414f6e5bb9] ...
	I0812 03:35:02.742101    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12414f6e5bb9"
	I0812 03:35:02.758315    8914 logs.go:123] Gathering logs for kube-scheduler [497339253ef3] ...
	I0812 03:35:02.758329    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497339253ef3"
	I0812 03:35:02.772128    8914 logs.go:123] Gathering logs for kube-proxy [b3cf63e263fe] ...
	I0812 03:35:02.772142    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3cf63e263fe"
	I0812 03:35:02.783966    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:35:02.783980    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:35:02.788435    8914 logs.go:123] Gathering logs for etcd [19daa5a836e8] ...
	I0812 03:35:02.788443    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19daa5a836e8"
	I0812 03:35:02.806889    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:35:02.806902    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:35:02.833135    8914 logs.go:123] Gathering logs for kube-apiserver [014c98333383] ...
	I0812 03:35:02.833142    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c98333383"
	I0812 03:35:02.848657    8914 logs.go:123] Gathering logs for coredns [42fb9cb7a732] ...
	I0812 03:35:02.848668    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42fb9cb7a732"
	I0812 03:35:05.361907    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:35:10.364495    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:35:10.364772    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:35:10.386721    8914 logs.go:276] 2 containers: [014c98333383 a84eef3c085a]
	I0812 03:35:10.386852    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:35:10.402579    8914 logs.go:276] 2 containers: [19daa5a836e8 dbf0e437f9ea]
	I0812 03:35:10.402652    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:35:10.415566    8914 logs.go:276] 1 containers: [42fb9cb7a732]
	I0812 03:35:10.415632    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:35:10.426837    8914 logs.go:276] 2 containers: [12414f6e5bb9 497339253ef3]
	I0812 03:35:10.426916    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:35:10.437393    8914 logs.go:276] 1 containers: [b3cf63e263fe]
	I0812 03:35:10.437465    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:35:10.447747    8914 logs.go:276] 2 containers: [aaa8bcdd506c 533ac025a3aa]
	I0812 03:35:10.447803    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:35:10.468320    8914 logs.go:276] 0 containers: []
	W0812 03:35:10.468335    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:35:10.468393    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:35:10.478823    8914 logs.go:276] 0 containers: []
	W0812 03:35:10.478834    8914 logs.go:278] No container was found matching "storage-provisioner"
	I0812 03:35:10.478840    8914 logs.go:123] Gathering logs for kube-controller-manager [533ac025a3aa] ...
	I0812 03:35:10.478845    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533ac025a3aa"
	I0812 03:35:10.493937    8914 logs.go:123] Gathering logs for etcd [19daa5a836e8] ...
	I0812 03:35:10.493946    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19daa5a836e8"
	I0812 03:35:10.510552    8914 logs.go:123] Gathering logs for etcd [dbf0e437f9ea] ...
	I0812 03:35:10.510569    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf0e437f9ea"
	I0812 03:35:10.525154    8914 logs.go:123] Gathering logs for coredns [42fb9cb7a732] ...
	I0812 03:35:10.525165    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42fb9cb7a732"
	I0812 03:35:10.536714    8914 logs.go:123] Gathering logs for kube-scheduler [497339253ef3] ...
	I0812 03:35:10.536730    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497339253ef3"
	I0812 03:35:10.551868    8914 logs.go:123] Gathering logs for kube-controller-manager [aaa8bcdd506c] ...
	I0812 03:35:10.551880    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa8bcdd506c"
	I0812 03:35:10.570098    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:35:10.570111    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:35:10.575000    8914 logs.go:123] Gathering logs for kube-apiserver [014c98333383] ...
	I0812 03:35:10.575006    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c98333383"
	I0812 03:35:10.593961    8914 logs.go:123] Gathering logs for kube-scheduler [12414f6e5bb9] ...
	I0812 03:35:10.593975    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12414f6e5bb9"
	I0812 03:35:10.605461    8914 logs.go:123] Gathering logs for kube-proxy [b3cf63e263fe] ...
	I0812 03:35:10.605476    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3cf63e263fe"
	I0812 03:35:10.619529    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:35:10.619544    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:35:10.631216    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:35:10.631229    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:35:10.667640    8914 logs.go:123] Gathering logs for kube-apiserver [a84eef3c085a] ...
	I0812 03:35:10.667653    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84eef3c085a"
	I0812 03:35:10.692384    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:35:10.692398    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:35:10.719001    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:35:10.719013    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:35:13.254065    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:35:18.256830    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:35:18.257265    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:35:18.299006    8914 logs.go:276] 2 containers: [014c98333383 a84eef3c085a]
	I0812 03:35:18.299143    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:35:18.321445    8914 logs.go:276] 2 containers: [19daa5a836e8 dbf0e437f9ea]
	I0812 03:35:18.321548    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:35:18.336668    8914 logs.go:276] 1 containers: [42fb9cb7a732]
	I0812 03:35:18.336746    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:35:18.349344    8914 logs.go:276] 2 containers: [12414f6e5bb9 497339253ef3]
	I0812 03:35:18.349416    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:35:18.360207    8914 logs.go:276] 1 containers: [b3cf63e263fe]
	I0812 03:35:18.360278    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:35:18.371244    8914 logs.go:276] 2 containers: [aaa8bcdd506c 533ac025a3aa]
	I0812 03:35:18.371315    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:35:18.385999    8914 logs.go:276] 0 containers: []
	W0812 03:35:18.386009    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:35:18.386063    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:35:18.396694    8914 logs.go:276] 0 containers: []
	W0812 03:35:18.396707    8914 logs.go:278] No container was found matching "storage-provisioner"
	I0812 03:35:18.396716    8914 logs.go:123] Gathering logs for kube-controller-manager [aaa8bcdd506c] ...
	I0812 03:35:18.396722    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa8bcdd506c"
	I0812 03:35:18.413694    8914 logs.go:123] Gathering logs for coredns [42fb9cb7a732] ...
	I0812 03:35:18.413710    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42fb9cb7a732"
	I0812 03:35:18.425795    8914 logs.go:123] Gathering logs for etcd [19daa5a836e8] ...
	I0812 03:35:18.425809    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19daa5a836e8"
	I0812 03:35:18.439931    8914 logs.go:123] Gathering logs for kube-controller-manager [533ac025a3aa] ...
	I0812 03:35:18.439945    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533ac025a3aa"
	I0812 03:35:18.451178    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:35:18.451189    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:35:18.463982    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:35:18.463996    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:35:18.499928    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:35:18.499943    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:35:18.524115    8914 logs.go:123] Gathering logs for kube-scheduler [12414f6e5bb9] ...
	I0812 03:35:18.524124    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12414f6e5bb9"
	I0812 03:35:18.535670    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:35:18.535682    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:35:18.540027    8914 logs.go:123] Gathering logs for kube-apiserver [014c98333383] ...
	I0812 03:35:18.540033    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c98333383"
	I0812 03:35:18.557403    8914 logs.go:123] Gathering logs for kube-apiserver [a84eef3c085a] ...
	I0812 03:35:18.557412    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84eef3c085a"
	I0812 03:35:18.582464    8914 logs.go:123] Gathering logs for etcd [dbf0e437f9ea] ...
	I0812 03:35:18.582477    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf0e437f9ea"
	I0812 03:35:18.596915    8914 logs.go:123] Gathering logs for kube-scheduler [497339253ef3] ...
	I0812 03:35:18.596928    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497339253ef3"
	I0812 03:35:18.611647    8914 logs.go:123] Gathering logs for kube-proxy [b3cf63e263fe] ...
	I0812 03:35:18.611659    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3cf63e263fe"
	I0812 03:35:18.625071    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:35:18.625083    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:35:21.163432    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:35:26.166241    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:35:26.166614    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:35:26.205312    8914 logs.go:276] 2 containers: [014c98333383 a84eef3c085a]
	I0812 03:35:26.205457    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:35:26.226731    8914 logs.go:276] 2 containers: [19daa5a836e8 dbf0e437f9ea]
	I0812 03:35:26.226838    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:35:26.241886    8914 logs.go:276] 1 containers: [42fb9cb7a732]
	I0812 03:35:26.241958    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:35:26.254641    8914 logs.go:276] 2 containers: [12414f6e5bb9 497339253ef3]
	I0812 03:35:26.254714    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:35:26.265860    8914 logs.go:276] 1 containers: [b3cf63e263fe]
	I0812 03:35:26.265938    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:35:26.276892    8914 logs.go:276] 2 containers: [aaa8bcdd506c 533ac025a3aa]
	I0812 03:35:26.276964    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:35:26.287436    8914 logs.go:276] 0 containers: []
	W0812 03:35:26.287446    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:35:26.287497    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:35:26.297885    8914 logs.go:276] 0 containers: []
	W0812 03:35:26.297898    8914 logs.go:278] No container was found matching "storage-provisioner"
	I0812 03:35:26.297905    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:35:26.297911    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:35:26.311015    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:35:26.311029    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:35:26.350726    8914 logs.go:123] Gathering logs for etcd [dbf0e437f9ea] ...
	I0812 03:35:26.350735    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf0e437f9ea"
	I0812 03:35:26.365933    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:35:26.365946    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:35:26.390248    8914 logs.go:123] Gathering logs for kube-apiserver [a84eef3c085a] ...
	I0812 03:35:26.390255    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84eef3c085a"
	I0812 03:35:26.415810    8914 logs.go:123] Gathering logs for kube-proxy [b3cf63e263fe] ...
	I0812 03:35:26.415819    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3cf63e263fe"
	I0812 03:35:26.427954    8914 logs.go:123] Gathering logs for etcd [19daa5a836e8] ...
	I0812 03:35:26.427963    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19daa5a836e8"
	I0812 03:35:26.442371    8914 logs.go:123] Gathering logs for kube-scheduler [497339253ef3] ...
	I0812 03:35:26.442384    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497339253ef3"
	I0812 03:35:26.456956    8914 logs.go:123] Gathering logs for kube-controller-manager [aaa8bcdd506c] ...
	I0812 03:35:26.456966    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa8bcdd506c"
	I0812 03:35:26.476713    8914 logs.go:123] Gathering logs for kube-controller-manager [533ac025a3aa] ...
	I0812 03:35:26.476723    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533ac025a3aa"
	I0812 03:35:26.488838    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:35:26.488852    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:35:26.493460    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:35:26.493467    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:35:26.528346    8914 logs.go:123] Gathering logs for kube-apiserver [014c98333383] ...
	I0812 03:35:26.528360    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c98333383"
	I0812 03:35:26.542600    8914 logs.go:123] Gathering logs for coredns [42fb9cb7a732] ...
	I0812 03:35:26.542613    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42fb9cb7a732"
	I0812 03:35:26.554508    8914 logs.go:123] Gathering logs for kube-scheduler [12414f6e5bb9] ...
	I0812 03:35:26.554517    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12414f6e5bb9"
	I0812 03:35:29.069451    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:35:34.072171    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:35:34.072542    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:35:34.108815    8914 logs.go:276] 2 containers: [014c98333383 a84eef3c085a]
	I0812 03:35:34.108933    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:35:34.127903    8914 logs.go:276] 2 containers: [19daa5a836e8 dbf0e437f9ea]
	I0812 03:35:34.127984    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:35:34.141497    8914 logs.go:276] 1 containers: [42fb9cb7a732]
	I0812 03:35:34.141559    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:35:34.154760    8914 logs.go:276] 2 containers: [12414f6e5bb9 497339253ef3]
	I0812 03:35:34.154825    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:35:34.165785    8914 logs.go:276] 1 containers: [b3cf63e263fe]
	I0812 03:35:34.165858    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:35:34.176667    8914 logs.go:276] 2 containers: [aaa8bcdd506c 533ac025a3aa]
	I0812 03:35:34.176737    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:35:34.188962    8914 logs.go:276] 0 containers: []
	W0812 03:35:34.188974    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:35:34.189029    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:35:34.199825    8914 logs.go:276] 0 containers: []
	W0812 03:35:34.199839    8914 logs.go:278] No container was found matching "storage-provisioner"
	I0812 03:35:34.199846    8914 logs.go:123] Gathering logs for etcd [19daa5a836e8] ...
	I0812 03:35:34.199851    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19daa5a836e8"
	I0812 03:35:34.214029    8914 logs.go:123] Gathering logs for kube-scheduler [497339253ef3] ...
	I0812 03:35:34.214042    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497339253ef3"
	I0812 03:35:34.234633    8914 logs.go:123] Gathering logs for kube-controller-manager [aaa8bcdd506c] ...
	I0812 03:35:34.234647    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa8bcdd506c"
	I0812 03:35:34.251857    8914 logs.go:123] Gathering logs for kube-apiserver [a84eef3c085a] ...
	I0812 03:35:34.251867    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84eef3c085a"
	I0812 03:35:34.276512    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:35:34.276522    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:35:34.311783    8914 logs.go:123] Gathering logs for kube-apiserver [014c98333383] ...
	I0812 03:35:34.311798    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c98333383"
	I0812 03:35:34.326767    8914 logs.go:123] Gathering logs for kube-proxy [b3cf63e263fe] ...
	I0812 03:35:34.326778    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3cf63e263fe"
	I0812 03:35:34.339175    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:35:34.339189    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:35:34.363643    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:35:34.363652    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:35:34.375273    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:35:34.375285    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:35:34.379728    8914 logs.go:123] Gathering logs for kube-controller-manager [533ac025a3aa] ...
	I0812 03:35:34.379736    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533ac025a3aa"
	I0812 03:35:34.393886    8914 logs.go:123] Gathering logs for etcd [dbf0e437f9ea] ...
	I0812 03:35:34.393897    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf0e437f9ea"
	I0812 03:35:34.408630    8914 logs.go:123] Gathering logs for coredns [42fb9cb7a732] ...
	I0812 03:35:34.408640    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42fb9cb7a732"
	I0812 03:35:34.420352    8914 logs.go:123] Gathering logs for kube-scheduler [12414f6e5bb9] ...
	I0812 03:35:34.420365    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12414f6e5bb9"
	I0812 03:35:34.431616    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:35:34.431629    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:35:36.971849    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:35:41.974011    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:35:41.974385    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:35:42.009404    8914 logs.go:276] 2 containers: [014c98333383 a84eef3c085a]
	I0812 03:35:42.009540    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:35:42.029068    8914 logs.go:276] 2 containers: [19daa5a836e8 dbf0e437f9ea]
	I0812 03:35:42.029146    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:35:42.043888    8914 logs.go:276] 1 containers: [42fb9cb7a732]
	I0812 03:35:42.043954    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:35:42.060432    8914 logs.go:276] 2 containers: [12414f6e5bb9 497339253ef3]
	I0812 03:35:42.060499    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:35:42.072263    8914 logs.go:276] 1 containers: [b3cf63e263fe]
	I0812 03:35:42.072340    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:35:42.084688    8914 logs.go:276] 2 containers: [aaa8bcdd506c 533ac025a3aa]
	I0812 03:35:42.084751    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:35:42.097244    8914 logs.go:276] 0 containers: []
	W0812 03:35:42.097255    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:35:42.097291    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:35:42.108583    8914 logs.go:276] 0 containers: []
	W0812 03:35:42.108595    8914 logs.go:278] No container was found matching "storage-provisioner"
	I0812 03:35:42.108602    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:35:42.108608    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:35:42.149120    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:35:42.149130    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:35:42.188301    8914 logs.go:123] Gathering logs for kube-controller-manager [aaa8bcdd506c] ...
	I0812 03:35:42.188312    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa8bcdd506c"
	I0812 03:35:42.210069    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:35:42.210080    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:35:42.235648    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:35:42.235658    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:35:42.247284    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:35:42.247292    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:35:42.251598    8914 logs.go:123] Gathering logs for kube-apiserver [a84eef3c085a] ...
	I0812 03:35:42.251605    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84eef3c085a"
	I0812 03:35:42.280635    8914 logs.go:123] Gathering logs for kube-scheduler [12414f6e5bb9] ...
	I0812 03:35:42.280646    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12414f6e5bb9"
	I0812 03:35:42.294391    8914 logs.go:123] Gathering logs for kube-scheduler [497339253ef3] ...
	I0812 03:35:42.294399    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497339253ef3"
	I0812 03:35:42.312233    8914 logs.go:123] Gathering logs for kube-proxy [b3cf63e263fe] ...
	I0812 03:35:42.312244    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3cf63e263fe"
	I0812 03:35:42.324845    8914 logs.go:123] Gathering logs for kube-controller-manager [533ac025a3aa] ...
	I0812 03:35:42.324861    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533ac025a3aa"
	I0812 03:35:42.338180    8914 logs.go:123] Gathering logs for kube-apiserver [014c98333383] ...
	I0812 03:35:42.338194    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c98333383"
	I0812 03:35:42.357598    8914 logs.go:123] Gathering logs for etcd [19daa5a836e8] ...
	I0812 03:35:42.357615    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19daa5a836e8"
	I0812 03:35:42.374956    8914 logs.go:123] Gathering logs for etcd [dbf0e437f9ea] ...
	I0812 03:35:42.374973    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf0e437f9ea"
	I0812 03:35:42.390569    8914 logs.go:123] Gathering logs for coredns [42fb9cb7a732] ...
	I0812 03:35:42.390590    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42fb9cb7a732"
	I0812 03:35:44.910762    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:35:49.913071    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:35:49.913484    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:35:49.961084    8914 logs.go:276] 2 containers: [014c98333383 a84eef3c085a]
	I0812 03:35:49.961171    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:35:49.981783    8914 logs.go:276] 2 containers: [19daa5a836e8 dbf0e437f9ea]
	I0812 03:35:49.981855    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:35:49.996979    8914 logs.go:276] 1 containers: [42fb9cb7a732]
	I0812 03:35:49.997062    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:35:50.010149    8914 logs.go:276] 2 containers: [12414f6e5bb9 497339253ef3]
	I0812 03:35:50.010221    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:35:50.022535    8914 logs.go:276] 1 containers: [b3cf63e263fe]
	I0812 03:35:50.022608    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:35:50.035145    8914 logs.go:276] 2 containers: [aaa8bcdd506c 533ac025a3aa]
	I0812 03:35:50.035219    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:35:50.049073    8914 logs.go:276] 0 containers: []
	W0812 03:35:50.049087    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:35:50.049152    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:35:50.061237    8914 logs.go:276] 0 containers: []
	W0812 03:35:50.061250    8914 logs.go:278] No container was found matching "storage-provisioner"
	I0812 03:35:50.061260    8914 logs.go:123] Gathering logs for kube-apiserver [014c98333383] ...
	I0812 03:35:50.061266    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c98333383"
	I0812 03:35:50.080390    8914 logs.go:123] Gathering logs for kube-apiserver [a84eef3c085a] ...
	I0812 03:35:50.080402    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84eef3c085a"
	I0812 03:35:50.107337    8914 logs.go:123] Gathering logs for kube-controller-manager [aaa8bcdd506c] ...
	I0812 03:35:50.107359    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa8bcdd506c"
	I0812 03:35:50.127847    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:35:50.127865    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:35:50.166919    8914 logs.go:123] Gathering logs for coredns [42fb9cb7a732] ...
	I0812 03:35:50.166932    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42fb9cb7a732"
	I0812 03:35:50.178673    8914 logs.go:123] Gathering logs for kube-scheduler [497339253ef3] ...
	I0812 03:35:50.178683    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497339253ef3"
	I0812 03:35:50.193022    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:35:50.193037    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:35:50.205115    8914 logs.go:123] Gathering logs for etcd [19daa5a836e8] ...
	I0812 03:35:50.205127    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19daa5a836e8"
	I0812 03:35:50.223183    8914 logs.go:123] Gathering logs for kube-scheduler [12414f6e5bb9] ...
	I0812 03:35:50.223199    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12414f6e5bb9"
	I0812 03:35:50.237963    8914 logs.go:123] Gathering logs for kube-controller-manager [533ac025a3aa] ...
	I0812 03:35:50.237971    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533ac025a3aa"
	I0812 03:35:50.249243    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:35:50.249256    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:35:50.273024    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:35:50.273032    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:35:50.308856    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:35:50.308863    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:35:50.313429    8914 logs.go:123] Gathering logs for etcd [dbf0e437f9ea] ...
	I0812 03:35:50.313436    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf0e437f9ea"
	I0812 03:35:50.327440    8914 logs.go:123] Gathering logs for kube-proxy [b3cf63e263fe] ...
	I0812 03:35:50.327453    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3cf63e263fe"
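
The cycle above repeats for the remainder of this log: a GET against https://10.0.2.15:8443/healthz fails after roughly five seconds with "Client.Timeout exceeded while awaiting headers", and each failed probe triggers the same diagnostics pass. A minimal Go sketch of such a probe, assuming a 5-second client timeout and skipped TLS verification (illustration only; the real client is built from the cluster's credentials):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // checkHealthz issues one GET against the apiserver health endpoint.
    // A hung apiserver surfaces exactly as in the log above: the client
    // times out while awaiting the response headers.
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // assumed; matches the ~5 s probe-to-"stopped:" gap
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return fmt.Errorf("stopped: %s: %w", url, err)
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
            fmt.Println(err)
        }
    }
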
	I0812 03:35:52.843367    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:35:57.846051    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:35:57.846489    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:35:57.904799    8914 logs.go:276] 2 containers: [014c98333383 a84eef3c085a]
	I0812 03:35:57.904920    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:35:57.920967    8914 logs.go:276] 2 containers: [19daa5a836e8 dbf0e437f9ea]
	I0812 03:35:57.921059    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:35:57.934392    8914 logs.go:276] 1 containers: [42fb9cb7a732]
	I0812 03:35:57.934461    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:35:57.945382    8914 logs.go:276] 2 containers: [12414f6e5bb9 497339253ef3]
	I0812 03:35:57.945455    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:35:57.956402    8914 logs.go:276] 1 containers: [b3cf63e263fe]
	I0812 03:35:57.956473    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:35:57.968865    8914 logs.go:276] 2 containers: [aaa8bcdd506c 533ac025a3aa]
	I0812 03:35:57.968935    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:35:57.978999    8914 logs.go:276] 0 containers: []
	W0812 03:35:57.979012    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:35:57.979065    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:35:57.989188    8914 logs.go:276] 0 containers: []
	W0812 03:35:57.989202    8914 logs.go:278] No container was found matching "storage-provisioner"
	I0812 03:35:57.989210    8914 logs.go:123] Gathering logs for kube-scheduler [497339253ef3] ...
	I0812 03:35:57.989216    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497339253ef3"
	I0812 03:35:58.003761    8914 logs.go:123] Gathering logs for kube-proxy [b3cf63e263fe] ...
	I0812 03:35:58.003774    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3cf63e263fe"
	I0812 03:35:58.015366    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:35:58.015376    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:35:58.019748    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:35:58.019756    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:35:58.055177    8914 logs.go:123] Gathering logs for kube-apiserver [014c98333383] ...
	I0812 03:35:58.055191    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c98333383"
	I0812 03:35:58.070281    8914 logs.go:123] Gathering logs for etcd [dbf0e437f9ea] ...
	I0812 03:35:58.070291    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf0e437f9ea"
	I0812 03:35:58.085607    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:35:58.085617    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:35:58.122314    8914 logs.go:123] Gathering logs for kube-apiserver [a84eef3c085a] ...
	I0812 03:35:58.122324    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84eef3c085a"
	I0812 03:35:58.146772    8914 logs.go:123] Gathering logs for kube-controller-manager [533ac025a3aa] ...
	I0812 03:35:58.146782    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533ac025a3aa"
	I0812 03:35:58.158080    8914 logs.go:123] Gathering logs for etcd [19daa5a836e8] ...
	I0812 03:35:58.158090    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19daa5a836e8"
	I0812 03:35:58.172252    8914 logs.go:123] Gathering logs for coredns [42fb9cb7a732] ...
	I0812 03:35:58.172264    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42fb9cb7a732"
	I0812 03:35:58.183728    8914 logs.go:123] Gathering logs for kube-scheduler [12414f6e5bb9] ...
	I0812 03:35:58.183742    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12414f6e5bb9"
	I0812 03:35:58.195292    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:35:58.195304    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:35:58.207238    8914 logs.go:123] Gathering logs for kube-controller-manager [aaa8bcdd506c] ...
	I0812 03:35:58.207248    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa8bcdd506c"
	I0812 03:35:58.224834    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:35:58.224843    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
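
Each diagnostics pass opens with one docker ps query per control-plane component, filtering on the k8s_ name prefix that kubelet gives pod containers. Because -a includes exited containers, kube-apiserver, etcd, kube-scheduler, and kube-controller-manager each report two IDs (a current and an earlier instance, i.e. those components have restarted), while kindnet and storage-provisioner report none. A sketch of that discovery step, assuming a local docker CLI rather than the SSH runner used here:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all containers (running or exited) whose name
    // matches k8s_<component>, one ID per line via the Go template format.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }
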
	I0812 03:36:00.752159    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:36:05.754498    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:36:05.754888    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:36:05.803868    8914 logs.go:276] 2 containers: [014c98333383 a84eef3c085a]
	I0812 03:36:05.803988    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:36:05.823848    8914 logs.go:276] 2 containers: [19daa5a836e8 dbf0e437f9ea]
	I0812 03:36:05.823916    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:36:05.837450    8914 logs.go:276] 1 containers: [42fb9cb7a732]
	I0812 03:36:05.837519    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:36:05.853906    8914 logs.go:276] 2 containers: [12414f6e5bb9 497339253ef3]
	I0812 03:36:05.853963    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:36:05.864989    8914 logs.go:276] 1 containers: [b3cf63e263fe]
	I0812 03:36:05.865054    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:36:05.875881    8914 logs.go:276] 2 containers: [aaa8bcdd506c 533ac025a3aa]
	I0812 03:36:05.875950    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:36:05.886354    8914 logs.go:276] 0 containers: []
	W0812 03:36:05.886365    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:36:05.886422    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:36:05.896371    8914 logs.go:276] 0 containers: []
	W0812 03:36:05.896384    8914 logs.go:278] No container was found matching "storage-provisioner"
	I0812 03:36:05.896392    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:36:05.896400    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:36:05.908697    8914 logs.go:123] Gathering logs for etcd [19daa5a836e8] ...
	I0812 03:36:05.908710    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19daa5a836e8"
	I0812 03:36:05.922572    8914 logs.go:123] Gathering logs for kube-controller-manager [aaa8bcdd506c] ...
	I0812 03:36:05.922584    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa8bcdd506c"
	I0812 03:36:05.940032    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:36:05.940044    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:36:05.944921    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:36:05.944930    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:36:05.979676    8914 logs.go:123] Gathering logs for kube-apiserver [a84eef3c085a] ...
	I0812 03:36:05.979688    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84eef3c085a"
	I0812 03:36:06.004611    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:36:06.004621    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:36:06.029961    8914 logs.go:123] Gathering logs for kube-apiserver [014c98333383] ...
	I0812 03:36:06.029968    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c98333383"
	I0812 03:36:06.044969    8914 logs.go:123] Gathering logs for etcd [dbf0e437f9ea] ...
	I0812 03:36:06.044980    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf0e437f9ea"
	I0812 03:36:06.059484    8914 logs.go:123] Gathering logs for kube-scheduler [12414f6e5bb9] ...
	I0812 03:36:06.059494    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12414f6e5bb9"
	I0812 03:36:06.071054    8914 logs.go:123] Gathering logs for kube-proxy [b3cf63e263fe] ...
	I0812 03:36:06.071063    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3cf63e263fe"
	I0812 03:36:06.083834    8914 logs.go:123] Gathering logs for kube-controller-manager [533ac025a3aa] ...
	I0812 03:36:06.083849    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533ac025a3aa"
	I0812 03:36:06.102992    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:36:06.103005    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:36:06.138800    8914 logs.go:123] Gathering logs for coredns [42fb9cb7a732] ...
	I0812 03:36:06.138807    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42fb9cb7a732"
	I0812 03:36:06.153587    8914 logs.go:123] Gathering logs for kube-scheduler [497339253ef3] ...
	I0812 03:36:06.153598    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497339253ef3"
	I0812 03:36:08.673214    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:36:13.675684    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:36:13.675838    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:36:13.687441    8914 logs.go:276] 2 containers: [014c98333383 a84eef3c085a]
	I0812 03:36:13.687515    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:36:13.707065    8914 logs.go:276] 2 containers: [19daa5a836e8 dbf0e437f9ea]
	I0812 03:36:13.707130    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:36:13.717930    8914 logs.go:276] 1 containers: [42fb9cb7a732]
	I0812 03:36:13.718004    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:36:13.729089    8914 logs.go:276] 2 containers: [12414f6e5bb9 497339253ef3]
	I0812 03:36:13.729164    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:36:13.739835    8914 logs.go:276] 1 containers: [b3cf63e263fe]
	I0812 03:36:13.739892    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:36:13.750567    8914 logs.go:276] 2 containers: [aaa8bcdd506c 533ac025a3aa]
	I0812 03:36:13.750625    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:36:13.760755    8914 logs.go:276] 0 containers: []
	W0812 03:36:13.760770    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:36:13.760826    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:36:13.774293    8914 logs.go:276] 0 containers: []
	W0812 03:36:13.774306    8914 logs.go:278] No container was found matching "storage-provisioner"
	I0812 03:36:13.774313    8914 logs.go:123] Gathering logs for kube-apiserver [a84eef3c085a] ...
	I0812 03:36:13.774318    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84eef3c085a"
	I0812 03:36:13.799090    8914 logs.go:123] Gathering logs for etcd [dbf0e437f9ea] ...
	I0812 03:36:13.799102    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf0e437f9ea"
	I0812 03:36:13.813929    8914 logs.go:123] Gathering logs for kube-scheduler [12414f6e5bb9] ...
	I0812 03:36:13.813940    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12414f6e5bb9"
	I0812 03:36:13.826187    8914 logs.go:123] Gathering logs for kube-proxy [b3cf63e263fe] ...
	I0812 03:36:13.826197    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3cf63e263fe"
	I0812 03:36:13.841276    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:36:13.841287    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:36:13.853004    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:36:13.853016    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:36:13.888424    8914 logs.go:123] Gathering logs for kube-scheduler [497339253ef3] ...
	I0812 03:36:13.888438    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497339253ef3"
	I0812 03:36:13.903070    8914 logs.go:123] Gathering logs for kube-controller-manager [aaa8bcdd506c] ...
	I0812 03:36:13.903080    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa8bcdd506c"
	I0812 03:36:13.921388    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:36:13.921398    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:36:13.947066    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:36:13.947072    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:36:13.951382    8914 logs.go:123] Gathering logs for etcd [19daa5a836e8] ...
	I0812 03:36:13.951390    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19daa5a836e8"
	I0812 03:36:13.965191    8914 logs.go:123] Gathering logs for coredns [42fb9cb7a732] ...
	I0812 03:36:13.965202    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42fb9cb7a732"
	I0812 03:36:13.977537    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:36:13.977548    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:36:14.016030    8914 logs.go:123] Gathering logs for kube-apiserver [014c98333383] ...
	I0812 03:36:14.016037    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c98333383"
	I0812 03:36:14.030999    8914 logs.go:123] Gathering logs for kube-controller-manager [533ac025a3aa] ...
	I0812 03:36:14.031010    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533ac025a3aa"
	I0812 03:36:16.545004    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:36:21.547820    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:36:21.548181    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:36:21.582444    8914 logs.go:276] 2 containers: [014c98333383 a84eef3c085a]
	I0812 03:36:21.582590    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:36:21.602838    8914 logs.go:276] 2 containers: [19daa5a836e8 dbf0e437f9ea]
	I0812 03:36:21.602939    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:36:21.617582    8914 logs.go:276] 1 containers: [42fb9cb7a732]
	I0812 03:36:21.617653    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:36:21.629860    8914 logs.go:276] 2 containers: [12414f6e5bb9 497339253ef3]
	I0812 03:36:21.629935    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:36:21.641168    8914 logs.go:276] 1 containers: [b3cf63e263fe]
	I0812 03:36:21.641229    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:36:21.652402    8914 logs.go:276] 2 containers: [aaa8bcdd506c 533ac025a3aa]
	I0812 03:36:21.652463    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:36:21.662930    8914 logs.go:276] 0 containers: []
	W0812 03:36:21.662943    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:36:21.663003    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:36:21.672888    8914 logs.go:276] 0 containers: []
	W0812 03:36:21.672898    8914 logs.go:278] No container was found matching "storage-provisioner"
	I0812 03:36:21.672906    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:36:21.672912    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:36:21.677505    8914 logs.go:123] Gathering logs for kube-proxy [b3cf63e263fe] ...
	I0812 03:36:21.677512    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3cf63e263fe"
	I0812 03:36:21.689265    8914 logs.go:123] Gathering logs for kube-controller-manager [aaa8bcdd506c] ...
	I0812 03:36:21.689279    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa8bcdd506c"
	I0812 03:36:21.710223    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:36:21.710236    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:36:21.748743    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:36:21.748751    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:36:21.783681    8914 logs.go:123] Gathering logs for etcd [19daa5a836e8] ...
	I0812 03:36:21.783695    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19daa5a836e8"
	I0812 03:36:21.797755    8914 logs.go:123] Gathering logs for coredns [42fb9cb7a732] ...
	I0812 03:36:21.797766    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42fb9cb7a732"
	I0812 03:36:21.809388    8914 logs.go:123] Gathering logs for kube-scheduler [497339253ef3] ...
	I0812 03:36:21.809399    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497339253ef3"
	I0812 03:36:21.823356    8914 logs.go:123] Gathering logs for kube-apiserver [014c98333383] ...
	I0812 03:36:21.823369    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c98333383"
	I0812 03:36:21.838185    8914 logs.go:123] Gathering logs for kube-apiserver [a84eef3c085a] ...
	I0812 03:36:21.838196    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84eef3c085a"
	I0812 03:36:21.872647    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:36:21.872658    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:36:21.897785    8914 logs.go:123] Gathering logs for etcd [dbf0e437f9ea] ...
	I0812 03:36:21.897795    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf0e437f9ea"
	I0812 03:36:21.913479    8914 logs.go:123] Gathering logs for kube-scheduler [12414f6e5bb9] ...
	I0812 03:36:21.913492    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12414f6e5bb9"
	I0812 03:36:21.925989    8914 logs.go:123] Gathering logs for kube-controller-manager [533ac025a3aa] ...
	I0812 03:36:21.925999    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533ac025a3aa"
	I0812 03:36:21.937200    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:36:21.937213    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
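
The "container status" step above relies on a shell fallback: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a. If crictl is installed, which substitutes its path and crictl runs; if not, echo substitutes the bare name, that command fails to execute, and the trailing || sudo docker ps -a takes over. The same one-liner can be driven from Go (running it locally instead of through the SSH runner is an assumption):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prefer crictl when present; otherwise fall back to docker ps -a.
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println("error:", err)
        }
        fmt.Print(string(out))
    }
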
	I0812 03:36:24.453729    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:36:29.455952    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:36:29.456382    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:36:29.495390    8914 logs.go:276] 2 containers: [014c98333383 a84eef3c085a]
	I0812 03:36:29.495518    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:36:29.517250    8914 logs.go:276] 2 containers: [19daa5a836e8 dbf0e437f9ea]
	I0812 03:36:29.517349    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:36:29.534522    8914 logs.go:276] 1 containers: [42fb9cb7a732]
	I0812 03:36:29.534596    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:36:29.546806    8914 logs.go:276] 2 containers: [12414f6e5bb9 497339253ef3]
	I0812 03:36:29.546873    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:36:29.557786    8914 logs.go:276] 1 containers: [b3cf63e263fe]
	I0812 03:36:29.557847    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:36:29.568318    8914 logs.go:276] 2 containers: [aaa8bcdd506c 533ac025a3aa]
	I0812 03:36:29.568374    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:36:29.578844    8914 logs.go:276] 0 containers: []
	W0812 03:36:29.578857    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:36:29.578924    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:36:29.590032    8914 logs.go:276] 0 containers: []
	W0812 03:36:29.590045    8914 logs.go:278] No container was found matching "storage-provisioner"
	I0812 03:36:29.590057    8914 logs.go:123] Gathering logs for etcd [dbf0e437f9ea] ...
	I0812 03:36:29.590064    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf0e437f9ea"
	I0812 03:36:29.604055    8914 logs.go:123] Gathering logs for etcd [19daa5a836e8] ...
	I0812 03:36:29.604068    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19daa5a836e8"
	I0812 03:36:29.621622    8914 logs.go:123] Gathering logs for kube-controller-manager [aaa8bcdd506c] ...
	I0812 03:36:29.621634    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa8bcdd506c"
	I0812 03:36:29.639197    8914 logs.go:123] Gathering logs for kube-controller-manager [533ac025a3aa] ...
	I0812 03:36:29.639210    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533ac025a3aa"
	I0812 03:36:29.650783    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:36:29.650798    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:36:29.674541    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:36:29.674549    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:36:29.686174    8914 logs.go:123] Gathering logs for kube-scheduler [12414f6e5bb9] ...
	I0812 03:36:29.686190    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12414f6e5bb9"
	I0812 03:36:29.698143    8914 logs.go:123] Gathering logs for kube-apiserver [a84eef3c085a] ...
	I0812 03:36:29.698154    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84eef3c085a"
	I0812 03:36:29.726440    8914 logs.go:123] Gathering logs for coredns [42fb9cb7a732] ...
	I0812 03:36:29.726451    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42fb9cb7a732"
	I0812 03:36:29.737830    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:36:29.737843    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:36:29.775871    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:36:29.775878    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:36:29.813265    8914 logs.go:123] Gathering logs for kube-apiserver [014c98333383] ...
	I0812 03:36:29.813279    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c98333383"
	I0812 03:36:29.827794    8914 logs.go:123] Gathering logs for kube-scheduler [497339253ef3] ...
	I0812 03:36:29.827811    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497339253ef3"
	I0812 03:36:29.843527    8914 logs.go:123] Gathering logs for kube-proxy [b3cf63e263fe] ...
	I0812 03:36:29.843541    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3cf63e263fe"
	I0812 03:36:29.856491    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:36:29.856503    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:36:32.363328    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:36:37.365515    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:36:37.365621    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:36:37.379134    8914 logs.go:276] 2 containers: [014c98333383 a84eef3c085a]
	I0812 03:36:37.379204    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:36:37.390247    8914 logs.go:276] 2 containers: [19daa5a836e8 dbf0e437f9ea]
	I0812 03:36:37.390314    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:36:37.402164    8914 logs.go:276] 1 containers: [42fb9cb7a732]
	I0812 03:36:37.402237    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:36:37.414488    8914 logs.go:276] 2 containers: [12414f6e5bb9 497339253ef3]
	I0812 03:36:37.414561    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:36:37.425715    8914 logs.go:276] 1 containers: [b3cf63e263fe]
	I0812 03:36:37.425779    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:36:37.438150    8914 logs.go:276] 2 containers: [aaa8bcdd506c 533ac025a3aa]
	I0812 03:36:37.438229    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:36:37.458267    8914 logs.go:276] 0 containers: []
	W0812 03:36:37.458283    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:36:37.458346    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:36:37.470651    8914 logs.go:276] 0 containers: []
	W0812 03:36:37.470665    8914 logs.go:278] No container was found matching "storage-provisioner"
	I0812 03:36:37.470672    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:36:37.470679    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:36:37.476968    8914 logs.go:123] Gathering logs for kube-apiserver [014c98333383] ...
	I0812 03:36:37.476982    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c98333383"
	I0812 03:36:37.497403    8914 logs.go:123] Gathering logs for kube-controller-manager [533ac025a3aa] ...
	I0812 03:36:37.497417    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533ac025a3aa"
	I0812 03:36:37.510372    8914 logs.go:123] Gathering logs for kube-apiserver [a84eef3c085a] ...
	I0812 03:36:37.510384    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84eef3c085a"
	I0812 03:36:37.538812    8914 logs.go:123] Gathering logs for etcd [dbf0e437f9ea] ...
	I0812 03:36:37.538829    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf0e437f9ea"
	I0812 03:36:37.556700    8914 logs.go:123] Gathering logs for kube-scheduler [497339253ef3] ...
	I0812 03:36:37.556719    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497339253ef3"
	I0812 03:36:37.572983    8914 logs.go:123] Gathering logs for kube-controller-manager [aaa8bcdd506c] ...
	I0812 03:36:37.572998    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa8bcdd506c"
	I0812 03:36:37.592152    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:36:37.592170    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:36:37.605479    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:36:37.605498    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:36:37.645330    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:36:37.645353    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:36:37.687265    8914 logs.go:123] Gathering logs for coredns [42fb9cb7a732] ...
	I0812 03:36:37.687278    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42fb9cb7a732"
	I0812 03:36:37.699291    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:36:37.699304    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:36:37.724486    8914 logs.go:123] Gathering logs for etcd [19daa5a836e8] ...
	I0812 03:36:37.724500    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19daa5a836e8"
	I0812 03:36:37.748731    8914 logs.go:123] Gathering logs for kube-scheduler [12414f6e5bb9] ...
	I0812 03:36:37.748745    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12414f6e5bb9"
	I0812 03:36:37.770997    8914 logs.go:123] Gathering logs for kube-proxy [b3cf63e263fe] ...
	I0812 03:36:37.771009    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3cf63e263fe"
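
Every "Gathering logs for X ..." line above is paired with exactly one shell command: docker logs --tail 400 <id> for containers, journalctl -u <unit> -n 400 for the kubelet and docker/cri-docker units, a level-filtered dmesg for the kernel ring buffer, and kubectl describe nodes against the node-local kubeconfig. A sketch of that dispatch, with command strings copied from the log (local execution rather than SSH is an assumption):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one diagnostic command and reports its size; in the real
    // flow these buffers are folded into the failure report.
    func gather(name, command string) {
        fmt.Printf("Gathering logs for %s ...\n", name)
        out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
        if err != nil {
            fmt.Printf("%s failed: %v\n", name, err)
            return
        }
        fmt.Printf("%s: %d bytes\n", name, len(out))
    }

    func main() {
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
        gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        gather("kube-apiserver [014c98333383]", "docker logs --tail 400 014c98333383")
    }
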
	I0812 03:36:40.287171    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:36:45.289519    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:36:45.289655    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:36:45.301807    8914 logs.go:276] 2 containers: [014c98333383 a84eef3c085a]
	I0812 03:36:45.301883    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:36:45.312529    8914 logs.go:276] 2 containers: [19daa5a836e8 dbf0e437f9ea]
	I0812 03:36:45.312594    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:36:45.322854    8914 logs.go:276] 1 containers: [42fb9cb7a732]
	I0812 03:36:45.322925    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:36:45.333121    8914 logs.go:276] 2 containers: [12414f6e5bb9 497339253ef3]
	I0812 03:36:45.333189    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:36:45.345288    8914 logs.go:276] 1 containers: [b3cf63e263fe]
	I0812 03:36:45.345354    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:36:45.355737    8914 logs.go:276] 2 containers: [aaa8bcdd506c 533ac025a3aa]
	I0812 03:36:45.355801    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:36:45.365278    8914 logs.go:276] 0 containers: []
	W0812 03:36:45.365292    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:36:45.365352    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:36:45.375738    8914 logs.go:276] 0 containers: []
	W0812 03:36:45.375749    8914 logs.go:278] No container was found matching "storage-provisioner"
	I0812 03:36:45.375757    8914 logs.go:123] Gathering logs for kube-controller-manager [aaa8bcdd506c] ...
	I0812 03:36:45.375763    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa8bcdd506c"
	I0812 03:36:45.396246    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:36:45.396256    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:36:45.420633    8914 logs.go:123] Gathering logs for etcd [19daa5a836e8] ...
	I0812 03:36:45.420639    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19daa5a836e8"
	I0812 03:36:45.434989    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:36:45.435001    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:36:45.440079    8914 logs.go:123] Gathering logs for kube-scheduler [12414f6e5bb9] ...
	I0812 03:36:45.440085    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12414f6e5bb9"
	I0812 03:36:45.452322    8914 logs.go:123] Gathering logs for kube-controller-manager [533ac025a3aa] ...
	I0812 03:36:45.452332    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533ac025a3aa"
	I0812 03:36:45.463454    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:36:45.463467    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:36:45.501985    8914 logs.go:123] Gathering logs for kube-scheduler [497339253ef3] ...
	I0812 03:36:45.501997    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497339253ef3"
	I0812 03:36:45.516715    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:36:45.516728    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:36:45.531772    8914 logs.go:123] Gathering logs for kube-apiserver [014c98333383] ...
	I0812 03:36:45.531785    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c98333383"
	I0812 03:36:45.546475    8914 logs.go:123] Gathering logs for kube-apiserver [a84eef3c085a] ...
	I0812 03:36:45.546488    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84eef3c085a"
	I0812 03:36:45.572885    8914 logs.go:123] Gathering logs for etcd [dbf0e437f9ea] ...
	I0812 03:36:45.572896    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf0e437f9ea"
	I0812 03:36:45.587388    8914 logs.go:123] Gathering logs for coredns [42fb9cb7a732] ...
	I0812 03:36:45.587403    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42fb9cb7a732"
	I0812 03:36:45.598840    8914 logs.go:123] Gathering logs for kube-proxy [b3cf63e263fe] ...
	I0812 03:36:45.598852    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3cf63e263fe"
	I0812 03:36:45.610863    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:36:45.610874    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:36:48.149033    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:36:53.151717    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:36:53.152136    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:36:53.190749    8914 logs.go:276] 2 containers: [014c98333383 a84eef3c085a]
	I0812 03:36:53.190880    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:36:53.213189    8914 logs.go:276] 2 containers: [19daa5a836e8 dbf0e437f9ea]
	I0812 03:36:53.213303    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:36:53.229386    8914 logs.go:276] 1 containers: [42fb9cb7a732]
	I0812 03:36:53.229465    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:36:53.244048    8914 logs.go:276] 2 containers: [12414f6e5bb9 497339253ef3]
	I0812 03:36:53.244119    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:36:53.254969    8914 logs.go:276] 1 containers: [b3cf63e263fe]
	I0812 03:36:53.255040    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:36:53.265719    8914 logs.go:276] 2 containers: [aaa8bcdd506c 533ac025a3aa]
	I0812 03:36:53.265789    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:36:53.275865    8914 logs.go:276] 0 containers: []
	W0812 03:36:53.275879    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:36:53.275965    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:36:53.288458    8914 logs.go:276] 0 containers: []
	W0812 03:36:53.288468    8914 logs.go:278] No container was found matching "storage-provisioner"
	I0812 03:36:53.288475    8914 logs.go:123] Gathering logs for kube-scheduler [497339253ef3] ...
	I0812 03:36:53.288480    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497339253ef3"
	I0812 03:36:53.306643    8914 logs.go:123] Gathering logs for kube-proxy [b3cf63e263fe] ...
	I0812 03:36:53.306655    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3cf63e263fe"
	I0812 03:36:53.318375    8914 logs.go:123] Gathering logs for kube-controller-manager [aaa8bcdd506c] ...
	I0812 03:36:53.318387    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa8bcdd506c"
	I0812 03:36:53.336166    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:36:53.336179    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:36:53.374579    8914 logs.go:123] Gathering logs for kube-apiserver [014c98333383] ...
	I0812 03:36:53.374589    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c98333383"
	I0812 03:36:53.389572    8914 logs.go:123] Gathering logs for kube-apiserver [a84eef3c085a] ...
	I0812 03:36:53.389583    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84eef3c085a"
	I0812 03:36:53.414163    8914 logs.go:123] Gathering logs for coredns [42fb9cb7a732] ...
	I0812 03:36:53.414174    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42fb9cb7a732"
	I0812 03:36:53.425835    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:36:53.425846    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:36:53.437522    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:36:53.437535    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:36:53.441921    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:36:53.441932    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:36:53.479579    8914 logs.go:123] Gathering logs for kube-scheduler [12414f6e5bb9] ...
	I0812 03:36:53.479589    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12414f6e5bb9"
	I0812 03:36:53.492226    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:36:53.492239    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:36:53.515123    8914 logs.go:123] Gathering logs for etcd [19daa5a836e8] ...
	I0812 03:36:53.515130    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19daa5a836e8"
	I0812 03:36:53.533638    8914 logs.go:123] Gathering logs for kube-controller-manager [533ac025a3aa] ...
	I0812 03:36:53.533650    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533ac025a3aa"
	I0812 03:36:53.545677    8914 logs.go:123] Gathering logs for etcd [dbf0e437f9ea] ...
	I0812 03:36:53.545691    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf0e437f9ea"
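
Read end to end, the timestamps describe a poll-until-deadline loop: probe, time out after about five seconds, spend a fraction of a second gathering diagnostics, wait roughly two and a half seconds, probe again. A minimal sketch of that control flow (the interval, deadline, and function names are illustrative assumptions, not minikube's API):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForHealthy polls probe until it succeeds or the deadline passes,
    // running gather after every failed attempt, as in the log above.
    func waitForHealthy(probe func() error, gather func(), interval, deadline time.Duration) error {
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            if err := probe(); err == nil {
                return nil
            }
            gather() // collect container logs, journalctl, dmesg, describe nodes
            time.Sleep(interval)
        }
        return errors.New("apiserver never became healthy before the deadline")
    }

    func main() {
        err := waitForHealthy(
            func() error { return errors.New("context deadline exceeded") },
            func() { fmt.Println("gathering logs ...") },
            2500*time.Millisecond, // assumed pause between diagnostics and the next probe
            10*time.Second,        // assumed outer deadline for this demo
        )
        fmt.Println(err)
    }
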
	I0812 03:36:56.062396    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:37:01.064607    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:37:01.064855    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:37:01.099892    8914 logs.go:276] 2 containers: [014c98333383 a84eef3c085a]
	I0812 03:37:01.099992    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:37:01.122462    8914 logs.go:276] 2 containers: [19daa5a836e8 dbf0e437f9ea]
	I0812 03:37:01.122528    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:37:01.137578    8914 logs.go:276] 1 containers: [42fb9cb7a732]
	I0812 03:37:01.137648    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:37:01.148797    8914 logs.go:276] 2 containers: [12414f6e5bb9 497339253ef3]
	I0812 03:37:01.148861    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:37:01.159009    8914 logs.go:276] 1 containers: [b3cf63e263fe]
	I0812 03:37:01.159073    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:37:01.170377    8914 logs.go:276] 2 containers: [aaa8bcdd506c 533ac025a3aa]
	I0812 03:37:01.170446    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:37:01.180646    8914 logs.go:276] 0 containers: []
	W0812 03:37:01.180655    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:37:01.180724    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:37:01.190573    8914 logs.go:276] 0 containers: []
	W0812 03:37:01.190584    8914 logs.go:278] No container was found matching "storage-provisioner"
	I0812 03:37:01.190592    8914 logs.go:123] Gathering logs for etcd [19daa5a836e8] ...
	I0812 03:37:01.190599    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19daa5a836e8"
	I0812 03:37:01.204290    8914 logs.go:123] Gathering logs for kube-scheduler [497339253ef3] ...
	I0812 03:37:01.204301    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497339253ef3"
	I0812 03:37:01.218369    8914 logs.go:123] Gathering logs for kube-controller-manager [533ac025a3aa] ...
	I0812 03:37:01.218383    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533ac025a3aa"
	I0812 03:37:01.231408    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:37:01.231420    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:37:01.235987    8914 logs.go:123] Gathering logs for kube-apiserver [a84eef3c085a] ...
	I0812 03:37:01.235996    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84eef3c085a"
	I0812 03:37:01.260992    8914 logs.go:123] Gathering logs for coredns [42fb9cb7a732] ...
	I0812 03:37:01.261003    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42fb9cb7a732"
	I0812 03:37:01.273334    8914 logs.go:123] Gathering logs for kube-proxy [b3cf63e263fe] ...
	I0812 03:37:01.273345    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3cf63e263fe"
	I0812 03:37:01.284659    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:37:01.284671    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:37:01.320693    8914 logs.go:123] Gathering logs for kube-apiserver [014c98333383] ...
	I0812 03:37:01.320703    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c98333383"
	I0812 03:37:01.334752    8914 logs.go:123] Gathering logs for etcd [dbf0e437f9ea] ...
	I0812 03:37:01.334763    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf0e437f9ea"
	I0812 03:37:01.348988    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:37:01.348998    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:37:01.360429    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:37:01.360440    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:37:01.394369    8914 logs.go:123] Gathering logs for kube-scheduler [12414f6e5bb9] ...
	I0812 03:37:01.394380    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12414f6e5bb9"
	I0812 03:37:01.405908    8914 logs.go:123] Gathering logs for kube-controller-manager [aaa8bcdd506c] ...
	I0812 03:37:01.405920    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa8bcdd506c"
	I0812 03:37:01.430907    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:37:01.430918    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:37:03.957775    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:37:08.960084    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:37:08.960554    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:37:09.003022    8914 logs.go:276] 2 containers: [014c98333383 a84eef3c085a]
	I0812 03:37:09.003181    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:37:09.023718    8914 logs.go:276] 2 containers: [19daa5a836e8 dbf0e437f9ea]
	I0812 03:37:09.023837    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:37:09.039384    8914 logs.go:276] 1 containers: [42fb9cb7a732]
	I0812 03:37:09.039451    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:37:09.051800    8914 logs.go:276] 2 containers: [12414f6e5bb9 497339253ef3]
	I0812 03:37:09.051875    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:37:09.062533    8914 logs.go:276] 1 containers: [b3cf63e263fe]
	I0812 03:37:09.062602    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:37:09.073360    8914 logs.go:276] 2 containers: [aaa8bcdd506c 533ac025a3aa]
	I0812 03:37:09.073431    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:37:09.083751    8914 logs.go:276] 0 containers: []
	W0812 03:37:09.083765    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:37:09.083825    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:37:09.096154    8914 logs.go:276] 0 containers: []
	W0812 03:37:09.096166    8914 logs.go:278] No container was found matching "storage-provisioner"
	I0812 03:37:09.096173    8914 logs.go:123] Gathering logs for kube-scheduler [12414f6e5bb9] ...
	I0812 03:37:09.096179    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12414f6e5bb9"
	I0812 03:37:09.108251    8914 logs.go:123] Gathering logs for kube-scheduler [497339253ef3] ...
	I0812 03:37:09.108266    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497339253ef3"
	I0812 03:37:09.122439    8914 logs.go:123] Gathering logs for kube-controller-manager [533ac025a3aa] ...
	I0812 03:37:09.122452    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533ac025a3aa"
	I0812 03:37:09.134289    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:37:09.134299    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:37:09.145962    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:37:09.145974    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:37:09.150733    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:37:09.150741    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:37:09.184314    8914 logs.go:123] Gathering logs for kube-proxy [b3cf63e263fe] ...
	I0812 03:37:09.184326    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3cf63e263fe"
	I0812 03:37:09.197307    8914 logs.go:123] Gathering logs for etcd [dbf0e437f9ea] ...
	I0812 03:37:09.197324    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf0e437f9ea"
	I0812 03:37:09.211888    8914 logs.go:123] Gathering logs for coredns [42fb9cb7a732] ...
	I0812 03:37:09.211901    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42fb9cb7a732"
	I0812 03:37:09.223139    8914 logs.go:123] Gathering logs for kube-controller-manager [aaa8bcdd506c] ...
	I0812 03:37:09.223154    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa8bcdd506c"
	I0812 03:37:09.241071    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:37:09.241087    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:37:09.279103    8914 logs.go:123] Gathering logs for kube-apiserver [014c98333383] ...
	I0812 03:37:09.279110    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c98333383"
	I0812 03:37:09.302912    8914 logs.go:123] Gathering logs for etcd [19daa5a836e8] ...
	I0812 03:37:09.302922    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19daa5a836e8"
	I0812 03:37:09.323776    8914 logs.go:123] Gathering logs for kube-apiserver [a84eef3c085a] ...
	I0812 03:37:09.323786    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84eef3c085a"
	I0812 03:37:09.348653    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:37:09.348665    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:37:11.875006    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:37:16.877213    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:37:16.877354    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:37:16.888574    8914 logs.go:276] 2 containers: [014c98333383 a84eef3c085a]
	I0812 03:37:16.888646    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:37:16.904022    8914 logs.go:276] 2 containers: [19daa5a836e8 dbf0e437f9ea]
	I0812 03:37:16.904089    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:37:16.915664    8914 logs.go:276] 1 containers: [42fb9cb7a732]
	I0812 03:37:16.915729    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:37:16.926403    8914 logs.go:276] 2 containers: [12414f6e5bb9 497339253ef3]
	I0812 03:37:16.926470    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:37:16.944759    8914 logs.go:276] 1 containers: [b3cf63e263fe]
	I0812 03:37:16.944825    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:37:16.955779    8914 logs.go:276] 2 containers: [aaa8bcdd506c 533ac025a3aa]
	I0812 03:37:16.955846    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:37:16.965993    8914 logs.go:276] 0 containers: []
	W0812 03:37:16.966004    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:37:16.966060    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:37:16.977338    8914 logs.go:276] 0 containers: []
	W0812 03:37:16.977352    8914 logs.go:278] No container was found matching "storage-provisioner"
	I0812 03:37:16.977361    8914 logs.go:123] Gathering logs for kube-proxy [b3cf63e263fe] ...
	I0812 03:37:16.977368    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3cf63e263fe"
	I0812 03:37:16.993897    8914 logs.go:123] Gathering logs for etcd [19daa5a836e8] ...
	I0812 03:37:16.993910    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19daa5a836e8"
	I0812 03:37:17.008409    8914 logs.go:123] Gathering logs for kube-controller-manager [aaa8bcdd506c] ...
	I0812 03:37:17.008424    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa8bcdd506c"
	I0812 03:37:17.027405    8914 logs.go:123] Gathering logs for kube-apiserver [a84eef3c085a] ...
	I0812 03:37:17.027418    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84eef3c085a"
	I0812 03:37:17.054197    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:37:17.054218    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:37:17.093007    8914 logs.go:123] Gathering logs for kube-apiserver [014c98333383] ...
	I0812 03:37:17.093021    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c98333383"
	I0812 03:37:17.107935    8914 logs.go:123] Gathering logs for etcd [dbf0e437f9ea] ...
	I0812 03:37:17.107948    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf0e437f9ea"
	I0812 03:37:17.127757    8914 logs.go:123] Gathering logs for kube-scheduler [12414f6e5bb9] ...
	I0812 03:37:17.127775    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12414f6e5bb9"
	I0812 03:37:17.141493    8914 logs.go:123] Gathering logs for kube-scheduler [497339253ef3] ...
	I0812 03:37:17.141505    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497339253ef3"
	I0812 03:37:17.160146    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:37:17.160157    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:37:17.173032    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:37:17.173045    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:37:17.213441    8914 logs.go:123] Gathering logs for coredns [42fb9cb7a732] ...
	I0812 03:37:17.213462    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42fb9cb7a732"
	I0812 03:37:17.227520    8914 logs.go:123] Gathering logs for kube-controller-manager [533ac025a3aa] ...
	I0812 03:37:17.227536    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533ac025a3aa"
	I0812 03:37:17.241493    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:37:17.241505    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:37:17.266109    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:37:17.266120    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
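
The cycle above repeats throughout this log: minikube probes https://10.0.2.15:8443/healthz, each probe dies with a client timeout after roughly five seconds, and the retry logic then re-enumerates the control-plane containers and tails their logs before probing again. Note that 10.0.2.15 is the guest address QEMU assigns under user-mode (slirp) networking, which is generally not routable from the host, so a host-side probe of that address is expected to time out. What follows is a minimal Go sketch of the probe loop; the helper name, the 5s timeout, and the 2s retry pause are assumptions inferred from the log timestamps, not minikube's actual source.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers
// 200 OK or the overall deadline passes. Each individual probe is capped
// by the client timeout, which is what produces the repeated
// "Client.Timeout exceeded while awaiting headers" lines above.
func waitForHealthz(url string, deadline time.Time) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // assumed; matches the ~5s gap before each "stopped:" line
		Transport: &http.Transport{
			// the probe hits the apiserver's self-signed TLS endpoint directly
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver is healthy
			}
		}
		// on a timeout or a non-200 answer, pause briefly and try again;
		// the real tool gathers container logs in this window
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitForHealthz("https://10.0.2.15:8443/healthz", time.Now().Add(4*time.Minute)))
}
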
	I0812 03:37:19.770433    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:37:24.772379    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:37:24.772448    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:37:24.784408    8914 logs.go:276] 2 containers: [014c98333383 a84eef3c085a]
	I0812 03:37:24.784467    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:37:24.804288    8914 logs.go:276] 2 containers: [19daa5a836e8 dbf0e437f9ea]
	I0812 03:37:24.804351    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:37:24.815955    8914 logs.go:276] 1 containers: [42fb9cb7a732]
	I0812 03:37:24.816015    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:37:24.831902    8914 logs.go:276] 2 containers: [12414f6e5bb9 497339253ef3]
	I0812 03:37:24.831972    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:37:24.843795    8914 logs.go:276] 1 containers: [b3cf63e263fe]
	I0812 03:37:24.843848    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:37:24.855254    8914 logs.go:276] 2 containers: [aaa8bcdd506c 533ac025a3aa]
	I0812 03:37:24.855306    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:37:24.866296    8914 logs.go:276] 0 containers: []
	W0812 03:37:24.866307    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:37:24.866357    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:37:24.878773    8914 logs.go:276] 0 containers: []
	W0812 03:37:24.878784    8914 logs.go:278] No container was found matching "storage-provisioner"
	I0812 03:37:24.878792    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:37:24.878799    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:37:24.919260    8914 logs.go:123] Gathering logs for kube-apiserver [a84eef3c085a] ...
	I0812 03:37:24.919274    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84eef3c085a"
	I0812 03:37:24.945909    8914 logs.go:123] Gathering logs for kube-scheduler [12414f6e5bb9] ...
	I0812 03:37:24.945918    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12414f6e5bb9"
	I0812 03:37:24.968409    8914 logs.go:123] Gathering logs for kube-scheduler [497339253ef3] ...
	I0812 03:37:24.968419    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497339253ef3"
	I0812 03:37:24.984454    8914 logs.go:123] Gathering logs for kube-controller-manager [aaa8bcdd506c] ...
	I0812 03:37:24.984465    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa8bcdd506c"
	I0812 03:37:25.003253    8914 logs.go:123] Gathering logs for kube-controller-manager [533ac025a3aa] ...
	I0812 03:37:25.003269    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533ac025a3aa"
	I0812 03:37:25.016099    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:37:25.016110    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:37:25.056125    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:37:25.056141    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:37:25.060737    8914 logs.go:123] Gathering logs for kube-apiserver [014c98333383] ...
	I0812 03:37:25.060742    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c98333383"
	I0812 03:37:25.075148    8914 logs.go:123] Gathering logs for kube-proxy [b3cf63e263fe] ...
	I0812 03:37:25.075163    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3cf63e263fe"
	I0812 03:37:25.089615    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:37:25.089627    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:37:25.101755    8914 logs.go:123] Gathering logs for etcd [19daa5a836e8] ...
	I0812 03:37:25.101765    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19daa5a836e8"
	I0812 03:37:25.119924    8914 logs.go:123] Gathering logs for etcd [dbf0e437f9ea] ...
	I0812 03:37:25.119939    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf0e437f9ea"
	I0812 03:37:25.134923    8914 logs.go:123] Gathering logs for coredns [42fb9cb7a732] ...
	I0812 03:37:25.134933    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42fb9cb7a732"
	I0812 03:37:25.147182    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:37:25.147195    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:37:27.670955    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:37:32.671886    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:37:32.672117    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:37:32.699242    8914 logs.go:276] 2 containers: [014c98333383 a84eef3c085a]
	I0812 03:37:32.699367    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:37:32.715631    8914 logs.go:276] 2 containers: [19daa5a836e8 dbf0e437f9ea]
	I0812 03:37:32.715704    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:37:32.727542    8914 logs.go:276] 1 containers: [42fb9cb7a732]
	I0812 03:37:32.727617    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:37:32.738862    8914 logs.go:276] 2 containers: [12414f6e5bb9 497339253ef3]
	I0812 03:37:32.738940    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:37:32.749790    8914 logs.go:276] 1 containers: [b3cf63e263fe]
	I0812 03:37:32.749857    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:37:32.761063    8914 logs.go:276] 2 containers: [aaa8bcdd506c 533ac025a3aa]
	I0812 03:37:32.761135    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:37:32.771333    8914 logs.go:276] 0 containers: []
	W0812 03:37:32.771345    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:37:32.771400    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:37:32.781313    8914 logs.go:276] 0 containers: []
	W0812 03:37:32.781324    8914 logs.go:278] No container was found matching "storage-provisioner"
	I0812 03:37:32.781333    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:37:32.781339    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:37:32.817993    8914 logs.go:123] Gathering logs for kube-scheduler [12414f6e5bb9] ...
	I0812 03:37:32.818006    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12414f6e5bb9"
	I0812 03:37:32.830134    8914 logs.go:123] Gathering logs for kube-proxy [b3cf63e263fe] ...
	I0812 03:37:32.830145    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3cf63e263fe"
	I0812 03:37:32.845927    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:37:32.845938    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:37:32.870317    8914 logs.go:123] Gathering logs for etcd [19daa5a836e8] ...
	I0812 03:37:32.870325    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19daa5a836e8"
	I0812 03:37:32.884080    8914 logs.go:123] Gathering logs for kube-controller-manager [aaa8bcdd506c] ...
	I0812 03:37:32.884092    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa8bcdd506c"
	I0812 03:37:32.902438    8914 logs.go:123] Gathering logs for kube-controller-manager [533ac025a3aa] ...
	I0812 03:37:32.902452    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533ac025a3aa"
	I0812 03:37:32.914095    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:37:32.914106    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:37:32.953079    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:37:32.953092    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:37:32.958041    8914 logs.go:123] Gathering logs for kube-apiserver [014c98333383] ...
	I0812 03:37:32.958051    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c98333383"
	I0812 03:37:32.971770    8914 logs.go:123] Gathering logs for kube-apiserver [a84eef3c085a] ...
	I0812 03:37:32.971781    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84eef3c085a"
	I0812 03:37:32.996562    8914 logs.go:123] Gathering logs for etcd [dbf0e437f9ea] ...
	I0812 03:37:32.996576    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf0e437f9ea"
	I0812 03:37:33.016793    8914 logs.go:123] Gathering logs for kube-scheduler [497339253ef3] ...
	I0812 03:37:33.016804    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497339253ef3"
	I0812 03:37:33.031153    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:37:33.031164    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:37:33.042976    8914 logs.go:123] Gathering logs for coredns [42fb9cb7a732] ...
	I0812 03:37:33.042987    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42fb9cb7a732"
	I0812 03:37:35.556721    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:37:40.559116    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:37:40.559448    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:37:40.603624    8914 logs.go:276] 2 containers: [014c98333383 a84eef3c085a]
	I0812 03:37:40.603746    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:37:40.620412    8914 logs.go:276] 2 containers: [19daa5a836e8 dbf0e437f9ea]
	I0812 03:37:40.620497    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:37:40.632246    8914 logs.go:276] 1 containers: [42fb9cb7a732]
	I0812 03:37:40.632315    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:37:40.642887    8914 logs.go:276] 2 containers: [12414f6e5bb9 497339253ef3]
	I0812 03:37:40.642952    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:37:40.653453    8914 logs.go:276] 1 containers: [b3cf63e263fe]
	I0812 03:37:40.653524    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:37:40.664988    8914 logs.go:276] 2 containers: [aaa8bcdd506c 533ac025a3aa]
	I0812 03:37:40.665054    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:37:40.675588    8914 logs.go:276] 0 containers: []
	W0812 03:37:40.675600    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:37:40.675660    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:37:40.690446    8914 logs.go:276] 0 containers: []
	W0812 03:37:40.690462    8914 logs.go:278] No container was found matching "storage-provisioner"
	I0812 03:37:40.690469    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:37:40.690473    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:37:40.702963    8914 logs.go:123] Gathering logs for kube-apiserver [a84eef3c085a] ...
	I0812 03:37:40.702974    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84eef3c085a"
	I0812 03:37:40.728642    8914 logs.go:123] Gathering logs for etcd [dbf0e437f9ea] ...
	I0812 03:37:40.728653    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf0e437f9ea"
	I0812 03:37:40.747048    8914 logs.go:123] Gathering logs for kube-scheduler [12414f6e5bb9] ...
	I0812 03:37:40.747061    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12414f6e5bb9"
	I0812 03:37:40.759088    8914 logs.go:123] Gathering logs for kube-proxy [b3cf63e263fe] ...
	I0812 03:37:40.759101    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3cf63e263fe"
	I0812 03:37:40.771045    8914 logs.go:123] Gathering logs for kube-scheduler [497339253ef3] ...
	I0812 03:37:40.771058    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497339253ef3"
	I0812 03:37:40.786069    8914 logs.go:123] Gathering logs for kube-controller-manager [533ac025a3aa] ...
	I0812 03:37:40.786084    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533ac025a3aa"
	I0812 03:37:40.798233    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:37:40.798244    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:37:40.822452    8914 logs.go:123] Gathering logs for kube-apiserver [014c98333383] ...
	I0812 03:37:40.822459    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c98333383"
	I0812 03:37:40.836696    8914 logs.go:123] Gathering logs for etcd [19daa5a836e8] ...
	I0812 03:37:40.836708    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19daa5a836e8"
	I0812 03:37:40.853066    8914 logs.go:123] Gathering logs for coredns [42fb9cb7a732] ...
	I0812 03:37:40.853080    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42fb9cb7a732"
	I0812 03:37:40.864096    8914 logs.go:123] Gathering logs for kube-controller-manager [aaa8bcdd506c] ...
	I0812 03:37:40.864108    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa8bcdd506c"
	I0812 03:37:40.887440    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:37:40.887452    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:37:40.926579    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:37:40.926590    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:37:40.931613    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:37:40.931620    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:37:43.469523    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:37:48.471762    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:37:48.471952    8914 kubeadm.go:597] duration metric: took 4m3.935350292s to restartPrimaryControlPlane
	W0812 03:37:48.472062    8914 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0812 03:37:48.472113    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0812 03:37:49.431625    8914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 03:37:49.437144    8914 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 03:37:49.440265    8914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 03:37:49.443306    8914 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 03:37:49.443312    8914 kubeadm.go:157] found existing configuration files:
	
	I0812 03:37:49.443335    8914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51257 /etc/kubernetes/admin.conf
	I0812 03:37:49.446274    8914 kubeadm.go:163] "https://control-plane.minikube.internal:51257" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51257 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 03:37:49.446298    8914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 03:37:49.449307    8914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51257 /etc/kubernetes/kubelet.conf
	I0812 03:37:49.451894    8914 kubeadm.go:163] "https://control-plane.minikube.internal:51257" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51257 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 03:37:49.451920    8914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 03:37:49.455214    8914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51257 /etc/kubernetes/controller-manager.conf
	I0812 03:37:49.458163    8914 kubeadm.go:163] "https://control-plane.minikube.internal:51257" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51257 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 03:37:49.458186    8914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 03:37:49.461030    8914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51257 /etc/kubernetes/scheduler.conf
	I0812 03:37:49.463595    8914 kubeadm.go:163] "https://control-plane.minikube.internal:51257" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51257 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 03:37:49.463613    8914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
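
Before re-running kubeadm init, minikube decides whether the existing kubeconfigs under /etc/kubernetes can be reused: each file is grepped for the expected control-plane endpoint (https://control-plane.minikube.internal:51257 in this run) and removed with rm -f when the check fails. Here the check fails for all four files because the kubeadm reset above already deleted them. A hypothetical Go sketch of that check-and-remove pattern, with the paths and endpoint taken from the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleConfigs keeps a kubeconfig only if it already references the
// expected control-plane endpoint; anything else is removed so kubeadm
// can regenerate it on the next init.
func cleanStaleConfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(f) // best-effort delete, like `sudo rm -f`
			fmt.Printf("removed stale config: %s\n", f)
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:51257")
}
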
	I0812 03:37:49.466765    8914 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 03:37:49.484672    8914 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0812 03:37:49.484717    8914 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 03:37:49.533130    8914 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 03:37:49.533196    8914 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 03:37:49.533248    8914 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0812 03:37:49.581726    8914 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 03:37:49.586004    8914 out.go:204]   - Generating certificates and keys ...
	I0812 03:37:49.586035    8914 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 03:37:49.586081    8914 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 03:37:49.586127    8914 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0812 03:37:49.586165    8914 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0812 03:37:49.586228    8914 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0812 03:37:49.586255    8914 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0812 03:37:49.586312    8914 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0812 03:37:49.586348    8914 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0812 03:37:49.586402    8914 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0812 03:37:49.586436    8914 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0812 03:37:49.586454    8914 kubeadm.go:310] [certs] Using the existing "sa" key
	I0812 03:37:49.586480    8914 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 03:37:49.788183    8914 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 03:37:49.984845    8914 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 03:37:50.228221    8914 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 03:37:50.283704    8914 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 03:37:50.313456    8914 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 03:37:50.313503    8914 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 03:37:50.313524    8914 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 03:37:50.408152    8914 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 03:37:50.412120    8914 out.go:204]   - Booting up control plane ...
	I0812 03:37:50.412166    8914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 03:37:50.412208    8914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 03:37:50.412244    8914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 03:37:50.412314    8914 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 03:37:50.412401    8914 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0812 03:37:55.414882    8914 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.002391 seconds
	I0812 03:37:55.414968    8914 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0812 03:37:55.419934    8914 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0812 03:37:55.936193    8914 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0812 03:37:55.936437    8914 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-969000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0812 03:37:56.442298    8914 kubeadm.go:310] [bootstrap-token] Using token: qvizt4.2y8zyl62kvg199ij
	I0812 03:37:56.448908    8914 out.go:204]   - Configuring RBAC rules ...
	I0812 03:37:56.448974    8914 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0812 03:37:56.449034    8914 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0812 03:37:56.450704    8914 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0812 03:37:56.452216    8914 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0812 03:37:56.453202    8914 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0812 03:37:56.453957    8914 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0812 03:37:56.457058    8914 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0812 03:37:56.628943    8914 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0812 03:37:56.848272    8914 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0812 03:37:56.848736    8914 kubeadm.go:310] 
	I0812 03:37:56.848768    8914 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0812 03:37:56.848772    8914 kubeadm.go:310] 
	I0812 03:37:56.848812    8914 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0812 03:37:56.848815    8914 kubeadm.go:310] 
	I0812 03:37:56.848832    8914 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0812 03:37:56.848865    8914 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0812 03:37:56.848894    8914 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0812 03:37:56.848899    8914 kubeadm.go:310] 
	I0812 03:37:56.848924    8914 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0812 03:37:56.848927    8914 kubeadm.go:310] 
	I0812 03:37:56.848960    8914 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0812 03:37:56.848965    8914 kubeadm.go:310] 
	I0812 03:37:56.848988    8914 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0812 03:37:56.849020    8914 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0812 03:37:56.849086    8914 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0812 03:37:56.849090    8914 kubeadm.go:310] 
	I0812 03:37:56.849161    8914 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0812 03:37:56.849199    8914 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0812 03:37:56.849258    8914 kubeadm.go:310] 
	I0812 03:37:56.849296    8914 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qvizt4.2y8zyl62kvg199ij \
	I0812 03:37:56.849343    8914 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a3a24dc3606022793e481fb5bba25e8937e026ae56b76602b092063eafcc562a \
	I0812 03:37:56.849356    8914 kubeadm.go:310] 	--control-plane 
	I0812 03:37:56.849358    8914 kubeadm.go:310] 
	I0812 03:37:56.849429    8914 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0812 03:37:56.849511    8914 kubeadm.go:310] 
	I0812 03:37:56.849601    8914 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qvizt4.2y8zyl62kvg199ij \
	I0812 03:37:56.849660    8914 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a3a24dc3606022793e481fb5bba25e8937e026ae56b76602b092063eafcc562a 
	I0812 03:37:56.849719    8914 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
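
The --discovery-token-ca-cert-hash value in the join commands above is not arbitrary: kubeadm derives it as the SHA-256 digest of the DER-encoded Subject Public Key Info of the cluster CA certificate. A small Go sketch that reproduces the computation, assuming the CA lives under the certificateDir reported earlier (/var/lib/minikube/certs):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// kubeadm's discovery hash is sha256 over the DER-encoded
	// SubjectPublicKeyInfo of the cluster CA certificate.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
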
	I0812 03:37:56.849726    8914 cni.go:84] Creating CNI manager for ""
	I0812 03:37:56.849733    8914 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0812 03:37:56.854067    8914 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0812 03:37:56.858066    8914 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0812 03:37:56.861163    8914 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
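
The 496-byte /etc/cni/net.d/1-k8s.conflist written here is the bridge CNI configuration announced on the previous lines. A sketch of what such a conflist typically looks like, embedded in Go for illustration; the field values (including the 10.244.0.0/16 pod subnet) are assumptions based on a standard bridge-plus-portmap chain, not a byte-for-byte copy of the file minikube copied:

package main

import "os"

// A typical bridge-plus-portmap CNI chain; the values below are
// illustrative defaults, not the exact contents minikube generated.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// minikube copies the file over SSH; writing it locally (as root)
	// is enough for the sketch
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
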
	I0812 03:37:56.865930    8914 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 03:37:56.865975    8914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 03:37:56.866004    8914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-969000 minikube.k8s.io/updated_at=2024_08_12T03_37_56_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7 minikube.k8s.io/name=running-upgrade-969000 minikube.k8s.io/primary=true
	I0812 03:37:56.913914    8914 kubeadm.go:1113] duration metric: took 47.977583ms to wait for elevateKubeSystemPrivileges
	I0812 03:37:56.913919    8914 ops.go:34] apiserver oom_adj: -16
	I0812 03:37:56.913934    8914 kubeadm.go:394] duration metric: took 4m12.395964167s to StartCluster
	I0812 03:37:56.913944    8914 settings.go:142] acquiring lock: {Name:mk405bca217b1764467e7caec79ed71135791229 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:37:56.914113    8914 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:37:56.914506    8914 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/kubeconfig: {Name:mkb70885d9201a61b449567803d8de7b739c5101 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:37:56.914709    8914 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:37:56.914714    8914 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0812 03:37:56.914750    8914 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-969000"
	I0812 03:37:56.914756    8914 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-969000"
	I0812 03:37:56.914764    8914 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-969000"
	W0812 03:37:56.914768    8914 addons.go:243] addon storage-provisioner should already be in state true
	I0812 03:37:56.914782    8914 host.go:66] Checking if "running-upgrade-969000" exists ...
	I0812 03:37:56.914812    8914 config.go:182] Loaded profile config "running-upgrade-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0812 03:37:56.914847    8914 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-969000"
	I0812 03:37:56.915684    8914 kapi.go:59] client config for running-upgrade-969000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/running-upgrade-969000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/running-upgrade-969000/client.key", CAFile:"/Users/jenkins/minikube-integration/19409-6342/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1040a04e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0812 03:37:56.915800    8914 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-969000"
	W0812 03:37:56.915804    8914 addons.go:243] addon default-storageclass should already be in state true
	I0812 03:37:56.915817    8914 host.go:66] Checking if "running-upgrade-969000" exists ...
	I0812 03:37:56.919191    8914 out.go:177] * Verifying Kubernetes components...
	I0812 03:37:56.919562    8914 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 03:37:56.923239    8914 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 03:37:56.923245    8914 sshutil.go:53] new ssh client: &{IP:localhost Port:51225 SSHKeyPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/running-upgrade-969000/id_rsa Username:docker}
	I0812 03:37:56.927027    8914 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 03:37:56.930100    8914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 03:37:56.934077    8914 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 03:37:56.934084    8914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 03:37:56.934090    8914 sshutil.go:53] new ssh client: &{IP:localhost Port:51225 SSHKeyPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/running-upgrade-969000/id_rsa Username:docker}
	I0812 03:37:57.005512    8914 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 03:37:57.010455    8914 api_server.go:52] waiting for apiserver process to appear ...
	I0812 03:37:57.010503    8914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 03:37:57.014543    8914 api_server.go:72] duration metric: took 99.822542ms to wait for apiserver process to appear ...
	I0812 03:37:57.014551    8914 api_server.go:88] waiting for apiserver healthz status ...
	I0812 03:37:57.014558    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:37:57.042162    8914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 03:37:57.051930    8914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 03:38:02.016673    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:02.016715    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:07.017419    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:07.017473    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:12.017889    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:12.017943    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:17.018509    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:17.018558    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:22.019309    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:22.019342    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:27.020234    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:27.020260    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0812 03:38:27.380650    8914 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0812 03:38:27.385069    8914 out.go:177] * Enabled addons: storage-provisioner
	I0812 03:38:27.392954    8914 addons.go:510] duration metric: took 30.478657166s for enable addons: enabled=[storage-provisioner]
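
The asymmetry in the addon results above is consistent with the networking failure seen throughout the run: enabling default-storageclass requires a live API call from the host (listing StorageClasses at https://10.0.2.15:8443), which dials out and times out, while storage-provisioner only needs kubectl apply executed inside the guest over SSH, so it completes. A minimal reachability check that reproduces the failing dial, with the address taken from the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Reproduce the failing host-side dial to the guest apiserver.
	conn, err := net.DialTimeout("tcp", "10.0.2.15:8443", 5*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err) // matches the "i/o timeout" above
		return
	}
	conn.Close()
	fmt.Println("apiserver reachable")
}
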
	I0812 03:38:32.021428    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:32.021486    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:37.022911    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:37.022973    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:42.024947    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:42.024989    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:47.026024    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:47.026052    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:52.027223    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:52.027270    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:57.027770    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:57.027926    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:38:57.040430    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:38:57.040505    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:38:57.051810    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:38:57.051874    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:38:57.062739    8914 logs.go:276] 2 containers: [f2d00d5db5b6 08ca3e5de50c]
	I0812 03:38:57.062815    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:38:57.073272    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:38:57.073341    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:38:57.085044    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:38:57.085125    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:38:57.095906    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:38:57.095967    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:38:57.106857    8914 logs.go:276] 0 containers: []
	W0812 03:38:57.106867    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:38:57.106919    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:38:57.117168    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:38:57.117184    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:38:57.117191    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:38:57.131311    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:38:57.131324    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:38:57.143682    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:38:57.143695    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:38:57.155907    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:38:57.155919    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:38:57.168384    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:38:57.168397    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:38:57.185634    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:38:57.185645    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:38:57.223568    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:38:57.223574    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:38:57.227974    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:38:57.227983    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:38:57.264240    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:38:57.264255    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:38:57.278107    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:38:57.278119    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:38:57.289759    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:38:57.289775    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:38:57.304901    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:38:57.304912    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:38:57.328641    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:38:57.328650    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:38:59.842187    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:39:04.844588    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:39:04.844750    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:39:04.858386    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:39:04.858464    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:39:04.871278    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:39:04.871359    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:39:04.883036    8914 logs.go:276] 2 containers: [f2d00d5db5b6 08ca3e5de50c]
	I0812 03:39:04.883102    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:39:04.893772    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:39:04.893843    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:39:04.904433    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:39:04.904498    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:39:04.914831    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:39:04.914907    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:39:04.925025    8914 logs.go:276] 0 containers: []
	W0812 03:39:04.925039    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:39:04.925096    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:39:04.935646    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:39:04.935663    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:39:04.935669    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:39:04.950090    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:39:04.950100    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:39:04.961511    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:39:04.961525    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:39:04.997705    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:39:04.997721    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:39:05.002353    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:39:05.002360    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:39:05.037296    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:39:05.037309    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:39:05.051890    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:39:05.051900    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:39:05.063838    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:39:05.063849    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:39:05.076035    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:39:05.076047    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:39:05.101073    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:39:05.101081    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:39:05.116210    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:39:05.116222    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:39:05.127658    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:39:05.127668    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:39:05.144997    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:39:05.145010    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:39:07.658758    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:39:12.661048    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:39:12.661281    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:39:12.683826    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:39:12.683914    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:39:12.698170    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:39:12.698247    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:39:12.709138    8914 logs.go:276] 2 containers: [f2d00d5db5b6 08ca3e5de50c]
	I0812 03:39:12.709204    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:39:12.721420    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:39:12.721494    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:39:12.731900    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:39:12.731961    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:39:12.742724    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:39:12.742781    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:39:12.753068    8914 logs.go:276] 0 containers: []
	W0812 03:39:12.753080    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:39:12.753143    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:39:12.763652    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:39:12.763670    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:39:12.763675    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:39:12.777427    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:39:12.777440    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:39:12.789381    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:39:12.789392    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:39:12.806694    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:39:12.806706    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:39:12.818103    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:39:12.818115    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:39:12.841650    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:39:12.841658    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:39:12.880766    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:39:12.880778    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:39:12.886028    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:39:12.886035    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:39:12.900627    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:39:12.900637    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:39:12.914617    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:39:12.914629    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:39:12.926354    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:39:12.926364    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:39:12.961453    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:39:12.961466    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:39:12.974262    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:39:12.974273    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:39:15.491056    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:39:20.493814    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:39:20.494020    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:39:20.515782    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:39:20.515879    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:39:20.531488    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:39:20.531573    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:39:20.544260    8914 logs.go:276] 2 containers: [f2d00d5db5b6 08ca3e5de50c]
	I0812 03:39:20.544324    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:39:20.557022    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:39:20.557087    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:39:20.567805    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:39:20.567870    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:39:20.578311    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:39:20.578371    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:39:20.588454    8914 logs.go:276] 0 containers: []
	W0812 03:39:20.588465    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:39:20.588518    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:39:20.599413    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:39:20.599426    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:39:20.599431    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:39:20.613448    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:39:20.613460    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:39:20.625327    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:39:20.625338    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:39:20.640529    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:39:20.640541    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:39:20.652586    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:39:20.652598    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:39:20.664029    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:39:20.664040    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:39:20.699989    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:39:20.700018    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:39:20.704795    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:39:20.704801    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:39:20.723137    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:39:20.723147    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:39:20.748153    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:39:20.748173    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:39:20.761081    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:39:20.761092    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:39:20.796405    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:39:20.796417    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:39:20.808734    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:39:20.808749    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:39:23.328867    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:39:28.331158    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:39:28.331526    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:39:28.368942    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:39:28.369093    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:39:28.388144    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:39:28.388238    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:39:28.403268    8914 logs.go:276] 2 containers: [f2d00d5db5b6 08ca3e5de50c]
	I0812 03:39:28.403345    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:39:28.415989    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:39:28.416065    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:39:28.430419    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:39:28.430489    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:39:28.441235    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:39:28.441303    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:39:28.452772    8914 logs.go:276] 0 containers: []
	W0812 03:39:28.452783    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:39:28.452842    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:39:28.463793    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:39:28.463812    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:39:28.463819    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:39:28.479351    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:39:28.479362    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:39:28.500398    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:39:28.500409    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:39:28.523807    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:39:28.523818    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:39:28.559299    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:39:28.559314    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:39:28.563998    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:39:28.564006    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:39:28.579192    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:39:28.579208    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:39:28.590740    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:39:28.590754    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:39:28.602805    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:39:28.602820    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:39:28.639265    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:39:28.639278    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:39:28.653818    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:39:28.653832    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:39:28.666368    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:39:28.666380    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:39:28.684458    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:39:28.684472    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:39:31.198189    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:39:36.198635    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:39:36.198840    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:39:36.222171    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:39:36.222254    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:39:36.234030    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:39:36.234096    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:39:36.244486    8914 logs.go:276] 2 containers: [f2d00d5db5b6 08ca3e5de50c]
	I0812 03:39:36.244540    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:39:36.254736    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:39:36.254805    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:39:36.264979    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:39:36.265036    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:39:36.275915    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:39:36.275985    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:39:36.286276    8914 logs.go:276] 0 containers: []
	W0812 03:39:36.286287    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:39:36.286347    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:39:36.296489    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:39:36.296503    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:39:36.296508    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:39:36.301179    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:39:36.301188    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:39:36.315426    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:39:36.315436    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:39:36.329444    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:39:36.329454    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:39:36.349019    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:39:36.349031    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:39:36.363448    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:39:36.363458    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:39:36.374752    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:39:36.374761    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:39:36.392084    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:39:36.392092    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:39:36.417133    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:39:36.417147    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:39:36.429831    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:39:36.429843    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:39:36.470611    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:39:36.470625    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:39:36.506185    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:39:36.506199    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:39:36.518198    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:39:36.518209    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:39:39.032251    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:39:44.033247    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:39:44.033461    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:39:44.054487    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:39:44.054602    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:39:44.071069    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:39:44.071145    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:39:44.082934    8914 logs.go:276] 2 containers: [f2d00d5db5b6 08ca3e5de50c]
	I0812 03:39:44.083009    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:39:44.094087    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:39:44.094139    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:39:44.105267    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:39:44.105336    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:39:44.115842    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:39:44.115896    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:39:44.125929    8914 logs.go:276] 0 containers: []
	W0812 03:39:44.125941    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:39:44.125993    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:39:44.136587    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:39:44.136602    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:39:44.136607    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:39:44.160817    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:39:44.160826    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:39:44.195084    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:39:44.195098    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:39:44.209019    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:39:44.209030    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:39:44.224342    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:39:44.224352    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:39:44.236233    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:39:44.236245    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:39:44.247429    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:39:44.247439    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:39:44.265430    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:39:44.265441    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:39:44.277529    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:39:44.277539    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:39:44.288742    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:39:44.288752    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:39:44.325632    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:39:44.325644    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:39:44.330399    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:39:44.330405    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:39:44.345576    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:39:44.345585    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:39:46.859756    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:39:51.861942    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:39:51.862143    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:39:51.884564    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:39:51.884643    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:39:51.897959    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:39:51.898031    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:39:51.908985    8914 logs.go:276] 2 containers: [f2d00d5db5b6 08ca3e5de50c]
	I0812 03:39:51.909057    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:39:51.920286    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:39:51.920348    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:39:51.930755    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:39:51.930815    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:39:51.941152    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:39:51.941206    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:39:51.951848    8914 logs.go:276] 0 containers: []
	W0812 03:39:51.951860    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:39:51.951917    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:39:51.962484    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:39:51.962500    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:39:51.962506    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:39:52.000689    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:39:52.000698    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:39:52.005838    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:39:52.005844    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:39:52.020211    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:39:52.020221    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:39:52.033420    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:39:52.033431    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:39:52.048006    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:39:52.048016    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:39:52.066877    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:39:52.066887    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:39:52.088794    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:39:52.088804    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:39:52.100394    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:39:52.100405    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:39:52.112343    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:39:52.112353    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:39:52.150690    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:39:52.150701    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:39:52.164742    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:39:52.164756    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:39:52.177397    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:39:52.177411    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:39:54.704386    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:39:59.706642    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:39:59.706738    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:39:59.719469    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:39:59.719536    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:39:59.731067    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:39:59.731137    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:39:59.744189    8914 logs.go:276] 4 containers: [9df938e3e4be 8d562f33b5e4 f2d00d5db5b6 08ca3e5de50c]
	I0812 03:39:59.744268    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:39:59.755415    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:39:59.755477    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:39:59.766562    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:39:59.766626    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:39:59.778440    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:39:59.778510    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:39:59.793594    8914 logs.go:276] 0 containers: []
	W0812 03:39:59.793605    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:39:59.793666    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:39:59.805058    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:39:59.805078    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:39:59.805083    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:39:59.817086    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:39:59.817098    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:39:59.840670    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:39:59.840681    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:39:59.852484    8914 logs.go:123] Gathering logs for coredns [8d562f33b5e4] ...
	I0812 03:39:59.852498    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d562f33b5e4"
	I0812 03:39:59.863671    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:39:59.863684    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:39:59.880846    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:39:59.880858    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:39:59.902154    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:39:59.902167    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:39:59.939795    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:39:59.939804    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:39:59.974622    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:39:59.974632    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:39:59.987108    8914 logs.go:123] Gathering logs for coredns [9df938e3e4be] ...
	I0812 03:39:59.987121    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df938e3e4be"
	I0812 03:39:59.998672    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:39:59.998684    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:40:00.010228    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:40:00.010241    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:40:00.022436    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:40:00.022450    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:40:00.037069    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:40:00.037080    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:40:00.041628    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:40:00.041635    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:40:02.558401    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:40:07.559289    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:40:07.559500    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:40:07.588894    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:40:07.589022    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:40:07.607409    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:40:07.607496    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:40:07.621833    8914 logs.go:276] 4 containers: [9df938e3e4be 8d562f33b5e4 f2d00d5db5b6 08ca3e5de50c]
	I0812 03:40:07.621910    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:40:07.633986    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:40:07.634061    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:40:07.644994    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:40:07.645062    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:40:07.655967    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:40:07.656034    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:40:07.666272    8914 logs.go:276] 0 containers: []
	W0812 03:40:07.666285    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:40:07.666335    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:40:07.676909    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:40:07.676927    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:40:07.676932    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:40:07.688251    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:40:07.688261    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:40:07.724387    8914 logs.go:123] Gathering logs for coredns [9df938e3e4be] ...
	I0812 03:40:07.724397    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df938e3e4be"
	I0812 03:40:07.736014    8914 logs.go:123] Gathering logs for coredns [8d562f33b5e4] ...
	I0812 03:40:07.736026    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d562f33b5e4"
	I0812 03:40:07.747299    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:40:07.747309    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:40:07.762668    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:40:07.762682    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:40:07.780464    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:40:07.780476    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:40:07.792386    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:40:07.792397    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:40:07.829420    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:40:07.829436    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:40:07.851256    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:40:07.851267    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:40:07.868938    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:40:07.868949    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:40:07.880504    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:40:07.880517    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:40:07.904257    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:40:07.904266    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:40:07.908839    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:40:07.908847    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:40:07.923797    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:40:07.923808    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:40:10.437516    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:40:15.439855    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:40:15.440198    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:40:15.474014    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:40:15.474145    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:40:15.492252    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:40:15.492350    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:40:15.507004    8914 logs.go:276] 4 containers: [9df938e3e4be 8d562f33b5e4 f2d00d5db5b6 08ca3e5de50c]
	I0812 03:40:15.507088    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:40:15.519501    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:40:15.519567    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:40:15.530122    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:40:15.530189    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:40:15.540834    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:40:15.540906    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:40:15.552989    8914 logs.go:276] 0 containers: []
	W0812 03:40:15.553003    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:40:15.553068    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:40:15.563860    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:40:15.563878    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:40:15.563882    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:40:15.600109    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:40:15.600118    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:40:15.615304    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:40:15.615319    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:40:15.627473    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:40:15.627484    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:40:15.639545    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:40:15.639557    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:40:15.651990    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:40:15.652001    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:40:15.674814    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:40:15.674825    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:40:15.689944    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:40:15.689956    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:40:15.694388    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:40:15.694393    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:40:15.705643    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:40:15.705657    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:40:15.741089    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:40:15.741099    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:40:15.755611    8914 logs.go:123] Gathering logs for coredns [9df938e3e4be] ...
	I0812 03:40:15.755626    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df938e3e4be"
	I0812 03:40:15.767087    8914 logs.go:123] Gathering logs for coredns [8d562f33b5e4] ...
	I0812 03:40:15.767101    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d562f33b5e4"
	I0812 03:40:15.778164    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:40:15.778176    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:40:15.801751    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:40:15.801769    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:40:18.314566    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:40:23.316912    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:40:23.317065    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:40:23.344716    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:40:23.344839    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:40:23.359929    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:40:23.360007    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:40:23.372382    8914 logs.go:276] 4 containers: [9df938e3e4be 8d562f33b5e4 f2d00d5db5b6 08ca3e5de50c]
	I0812 03:40:23.372455    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:40:23.383401    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:40:23.383466    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:40:23.394106    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:40:23.394188    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:40:23.404316    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:40:23.404375    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:40:23.414673    8914 logs.go:276] 0 containers: []
	W0812 03:40:23.414683    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:40:23.414737    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:40:23.425599    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:40:23.425616    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:40:23.425621    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:40:23.460959    8914 logs.go:123] Gathering logs for coredns [8d562f33b5e4] ...
	I0812 03:40:23.460970    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d562f33b5e4"
	I0812 03:40:23.476443    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:40:23.476456    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:40:23.500814    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:40:23.500823    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:40:23.512267    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:40:23.512277    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:40:23.527180    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:40:23.527190    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:40:23.539067    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:40:23.539077    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:40:23.543593    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:40:23.543600    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:40:23.580956    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:40:23.580966    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:40:23.595836    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:40:23.595849    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:40:23.610168    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:40:23.610178    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:40:23.622123    8914 logs.go:123] Gathering logs for coredns [9df938e3e4be] ...
	I0812 03:40:23.622134    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df938e3e4be"
	I0812 03:40:23.634727    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:40:23.634738    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:40:23.646936    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:40:23.646949    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:40:23.664952    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:40:23.664966    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:40:26.190294    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:40:31.192427    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:40:31.192620    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:40:31.210582    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:40:31.210676    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:40:31.224763    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:40:31.224842    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:40:31.236793    8914 logs.go:276] 4 containers: [9df938e3e4be 8d562f33b5e4 f2d00d5db5b6 08ca3e5de50c]
	I0812 03:40:31.236862    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:40:31.252555    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:40:31.252627    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:40:31.262915    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:40:31.262972    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:40:31.274117    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:40:31.274180    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:40:31.284495    8914 logs.go:276] 0 containers: []
	W0812 03:40:31.284506    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:40:31.284555    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:40:31.294605    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:40:31.294621    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:40:31.294627    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:40:31.299129    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:40:31.299136    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:40:31.317062    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:40:31.317072    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:40:31.328816    8914 logs.go:123] Gathering logs for coredns [9df938e3e4be] ...
	I0812 03:40:31.328827    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df938e3e4be"
	I0812 03:40:31.340420    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:40:31.340434    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:40:31.366674    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:40:31.366683    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:40:31.379107    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:40:31.379117    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:40:31.396002    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:40:31.396012    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:40:31.407590    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:40:31.407600    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:40:31.444555    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:40:31.444563    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:40:31.480178    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:40:31.480189    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:40:31.492662    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:40:31.492675    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:40:31.507049    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:40:31.507061    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:40:31.521849    8914 logs.go:123] Gathering logs for coredns [8d562f33b5e4] ...
	I0812 03:40:31.521860    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d562f33b5e4"
	I0812 03:40:31.533618    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:40:31.533629    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:40:34.054236    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:40:39.056493    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:40:39.056703    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:40:39.072815    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:40:39.072900    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:40:39.085853    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:40:39.085927    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:40:39.100635    8914 logs.go:276] 4 containers: [9df938e3e4be 8d562f33b5e4 f2d00d5db5b6 08ca3e5de50c]
	I0812 03:40:39.100709    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:40:39.112262    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:40:39.112318    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:40:39.122910    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:40:39.122977    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:40:39.133836    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:40:39.133900    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:40:39.144028    8914 logs.go:276] 0 containers: []
	W0812 03:40:39.144040    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:40:39.144096    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:40:39.157788    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:40:39.157807    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:40:39.157811    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:40:39.193651    8914 logs.go:123] Gathering logs for coredns [8d562f33b5e4] ...
	I0812 03:40:39.193665    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d562f33b5e4"
	I0812 03:40:39.206287    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:40:39.206298    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:40:39.218462    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:40:39.218475    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:40:39.232657    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:40:39.232671    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:40:39.256393    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:40:39.256401    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:40:39.293289    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:40:39.293304    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:40:39.297851    8914 logs.go:123] Gathering logs for coredns [9df938e3e4be] ...
	I0812 03:40:39.297857    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df938e3e4be"
	I0812 03:40:39.312879    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:40:39.312893    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:40:39.330412    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:40:39.330422    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:40:39.341682    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:40:39.341696    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:40:39.353418    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:40:39.353432    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:40:39.367566    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:40:39.367578    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:40:39.379608    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:40:39.379623    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:40:39.391058    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:40:39.391072    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:40:41.907999    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:40:46.910279    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:40:46.910396    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:40:46.921750    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:40:46.921816    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:40:46.932092    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:40:46.932157    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:40:46.942930    8914 logs.go:276] 4 containers: [9df938e3e4be 8d562f33b5e4 f2d00d5db5b6 08ca3e5de50c]
	I0812 03:40:46.943002    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:40:46.953093    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:40:46.953156    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:40:46.963972    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:40:46.964031    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:40:46.975132    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:40:46.975208    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:40:46.985733    8914 logs.go:276] 0 containers: []
	W0812 03:40:46.985748    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:40:46.985800    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:40:46.995822    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:40:46.995839    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:40:46.995843    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:40:47.009989    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:40:47.010002    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:40:47.024042    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:40:47.024052    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:40:47.036361    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:40:47.036373    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:40:47.054563    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:40:47.054576    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:40:47.066546    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:40:47.066557    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:40:47.078452    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:40:47.078462    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:40:47.116092    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:40:47.116099    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:40:47.152332    8914 logs.go:123] Gathering logs for coredns [9df938e3e4be] ...
	I0812 03:40:47.152342    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df938e3e4be"
	I0812 03:40:47.163900    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:40:47.163909    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:40:47.188624    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:40:47.188633    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:40:47.193464    8914 logs.go:123] Gathering logs for coredns [8d562f33b5e4] ...
	I0812 03:40:47.193474    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d562f33b5e4"
	I0812 03:40:47.204777    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:40:47.204792    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:40:47.219905    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:40:47.219915    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:40:47.237583    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:40:47.237594    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:40:49.752156    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:40:54.754408    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:40:54.754562    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:40:54.774550    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:40:54.774638    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:40:54.789064    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:40:54.789139    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:40:54.807448    8914 logs.go:276] 4 containers: [9df938e3e4be 8d562f33b5e4 f2d00d5db5b6 08ca3e5de50c]
	I0812 03:40:54.807510    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:40:54.817975    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:40:54.818038    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:40:54.828863    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:40:54.828934    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:40:54.848313    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:40:54.848429    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:40:54.864206    8914 logs.go:276] 0 containers: []
	W0812 03:40:54.864220    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:40:54.864274    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:40:54.878165    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:40:54.878183    8914 logs.go:123] Gathering logs for coredns [9df938e3e4be] ...
	I0812 03:40:54.878187    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df938e3e4be"
	I0812 03:40:54.890189    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:40:54.890205    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:40:54.901854    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:40:54.901863    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:40:54.913617    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:40:54.913626    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:40:54.930833    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:40:54.930842    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:40:54.942910    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:40:54.942919    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:40:54.965977    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:40:54.965983    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:40:54.980103    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:40:54.980117    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:40:54.985058    8914 logs.go:123] Gathering logs for coredns [8d562f33b5e4] ...
	I0812 03:40:54.985064    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d562f33b5e4"
	I0812 03:40:54.996729    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:40:54.996744    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:40:55.012377    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:40:55.012386    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:40:55.050156    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:40:55.050164    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:40:55.091208    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:40:55.091224    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:40:55.106156    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:40:55.106168    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:40:55.124181    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:40:55.124192    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:40:57.642309    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:41:02.644584    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:41:02.644813    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:41:02.671842    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:41:02.671947    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:41:02.689324    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:41:02.689413    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:41:02.702873    8914 logs.go:276] 4 containers: [9df938e3e4be 8d562f33b5e4 f2d00d5db5b6 08ca3e5de50c]
	I0812 03:41:02.702945    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:41:02.719422    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:41:02.719488    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:41:02.729688    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:41:02.729759    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:41:02.740168    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:41:02.740237    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:41:02.750421    8914 logs.go:276] 0 containers: []
	W0812 03:41:02.750441    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:41:02.750511    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:41:02.761552    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:41:02.761571    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:41:02.761576    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:41:02.773810    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:41:02.773823    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:41:02.785632    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:41:02.785645    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:41:02.810428    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:41:02.810436    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:41:02.824584    8914 logs.go:123] Gathering logs for coredns [9df938e3e4be] ...
	I0812 03:41:02.824599    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df938e3e4be"
	I0812 03:41:02.836354    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:41:02.836370    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:41:02.852693    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:41:02.852702    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:41:02.864183    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:41:02.864196    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:41:02.903560    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:41:02.903576    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:41:02.926638    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:41:02.926653    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:41:02.938177    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:41:02.938186    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:41:02.950887    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:41:02.950901    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:41:02.955524    8914 logs.go:123] Gathering logs for coredns [8d562f33b5e4] ...
	I0812 03:41:02.955530    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d562f33b5e4"
	I0812 03:41:02.971549    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:41:02.971561    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:41:02.989673    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:41:02.989684    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:41:05.528573    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:41:10.529809    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:41:10.530031    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:41:10.553995    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:41:10.554079    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:41:10.570106    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:41:10.570177    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:41:10.582042    8914 logs.go:276] 4 containers: [9df938e3e4be 8d562f33b5e4 f2d00d5db5b6 08ca3e5de50c]
	I0812 03:41:10.582105    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:41:10.592358    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:41:10.592417    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:41:10.603424    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:41:10.603486    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:41:10.614912    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:41:10.614982    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:41:10.631694    8914 logs.go:276] 0 containers: []
	W0812 03:41:10.631705    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:41:10.631753    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:41:10.642336    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:41:10.642355    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:41:10.642360    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:41:10.679347    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:41:10.679363    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:41:10.691749    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:41:10.691763    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:41:10.704759    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:41:10.704772    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:41:10.722550    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:41:10.722561    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:41:10.740230    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:41:10.740244    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:41:10.751653    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:41:10.751666    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:41:10.775739    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:41:10.775747    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:41:10.780417    8914 logs.go:123] Gathering logs for coredns [9df938e3e4be] ...
	I0812 03:41:10.780423    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df938e3e4be"
	I0812 03:41:10.791710    8914 logs.go:123] Gathering logs for coredns [8d562f33b5e4] ...
	I0812 03:41:10.791722    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d562f33b5e4"
	I0812 03:41:10.810238    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:41:10.810249    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:41:10.826191    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:41:10.826202    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:41:10.861588    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:41:10.861596    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:41:10.877815    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:41:10.877829    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:41:10.891685    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:41:10.891698    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:41:13.405979    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:41:18.408191    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:41:18.408409    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:41:18.431864    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:41:18.431962    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:41:18.448045    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:41:18.448133    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:41:18.460439    8914 logs.go:276] 4 containers: [9df938e3e4be 8d562f33b5e4 f2d00d5db5b6 08ca3e5de50c]
	I0812 03:41:18.460513    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:41:18.471882    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:41:18.471954    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:41:18.483144    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:41:18.483212    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:41:18.493866    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:41:18.493934    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:41:18.503651    8914 logs.go:276] 0 containers: []
	W0812 03:41:18.503662    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:41:18.503718    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:41:18.514409    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:41:18.514427    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:41:18.514432    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:41:18.519028    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:41:18.519036    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:41:18.554946    8914 logs.go:123] Gathering logs for coredns [9df938e3e4be] ...
	I0812 03:41:18.554957    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df938e3e4be"
	I0812 03:41:18.566879    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:41:18.566889    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:41:18.581625    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:41:18.581636    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:41:18.599459    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:41:18.599469    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:41:18.614048    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:41:18.614060    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:41:18.628292    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:41:18.628303    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:41:18.640563    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:41:18.640574    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:41:18.665776    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:41:18.665785    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:41:18.677841    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:41:18.677851    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:41:18.715131    8914 logs.go:123] Gathering logs for coredns [8d562f33b5e4] ...
	I0812 03:41:18.715146    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d562f33b5e4"
	I0812 03:41:18.727416    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:41:18.727430    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:41:18.739380    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:41:18.739389    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:41:18.751243    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:41:18.751258    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:41:21.264979    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:41:26.267099    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:41:26.267270    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:41:26.281542    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:41:26.281621    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:41:26.293125    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:41:26.293201    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:41:26.304217    8914 logs.go:276] 4 containers: [9df938e3e4be 8d562f33b5e4 f2d00d5db5b6 08ca3e5de50c]
	I0812 03:41:26.304290    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:41:26.329861    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:41:26.329935    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:41:26.340254    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:41:26.340317    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:41:26.350788    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:41:26.350859    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:41:26.361023    8914 logs.go:276] 0 containers: []
	W0812 03:41:26.361038    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:41:26.361101    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:41:26.374816    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:41:26.374832    8914 logs.go:123] Gathering logs for coredns [9df938e3e4be] ...
	I0812 03:41:26.374837    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df938e3e4be"
	I0812 03:41:26.386443    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:41:26.386453    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:41:26.398597    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:41:26.398608    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:41:26.419839    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:41:26.419850    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:41:26.442727    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:41:26.442735    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:41:26.455987    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:41:26.455998    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:41:26.495415    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:41:26.495423    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:41:26.509682    8914 logs.go:123] Gathering logs for coredns [8d562f33b5e4] ...
	I0812 03:41:26.509693    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d562f33b5e4"
	I0812 03:41:26.522047    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:41:26.522059    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:41:26.557250    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:41:26.557264    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:41:26.573142    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:41:26.573153    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:41:26.584637    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:41:26.584647    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:41:26.589270    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:41:26.589279    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:41:26.603846    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:41:26.603857    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:41:26.615894    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:41:26.615906    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:41:29.130071    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:41:34.132259    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:41:34.132562    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:41:34.166041    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:41:34.166132    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:41:34.181766    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:41:34.181835    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:41:34.197104    8914 logs.go:276] 4 containers: [9df938e3e4be 8d562f33b5e4 f2d00d5db5b6 08ca3e5de50c]
	I0812 03:41:34.197176    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:41:34.212065    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:41:34.212138    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:41:34.223167    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:41:34.223246    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:41:34.234261    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:41:34.234326    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:41:34.244192    8914 logs.go:276] 0 containers: []
	W0812 03:41:34.244204    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:41:34.244266    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:41:34.255343    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:41:34.255362    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:41:34.255368    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:41:34.269776    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:41:34.269787    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:41:34.284570    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:41:34.284580    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:41:34.296093    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:41:34.296105    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:41:34.300891    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:41:34.300898    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:41:34.334495    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:41:34.334511    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:41:34.349232    8914 logs.go:123] Gathering logs for coredns [9df938e3e4be] ...
	I0812 03:41:34.349243    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df938e3e4be"
	I0812 03:41:34.361240    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:41:34.361250    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:41:34.372775    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:41:34.372786    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:41:34.389906    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:41:34.389917    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:41:34.427578    8914 logs.go:123] Gathering logs for coredns [8d562f33b5e4] ...
	I0812 03:41:34.427591    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d562f33b5e4"
	I0812 03:41:34.440676    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:41:34.440689    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:41:34.454150    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:41:34.454162    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:41:34.466976    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:41:34.467002    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:41:34.492291    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:41:34.492312    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:41:37.006942    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:41:42.009098    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:41:42.009303    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:41:42.026848    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:41:42.026938    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:41:42.043146    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:41:42.043223    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:41:42.054975    8914 logs.go:276] 4 containers: [9df938e3e4be 8d562f33b5e4 f2d00d5db5b6 08ca3e5de50c]
	I0812 03:41:42.055045    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:41:42.065341    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:41:42.065417    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:41:42.076046    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:41:42.076107    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:41:42.087906    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:41:42.087971    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:41:42.099081    8914 logs.go:276] 0 containers: []
	W0812 03:41:42.099092    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:41:42.099152    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:41:42.109539    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:41:42.109558    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:41:42.109564    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:41:42.147196    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:41:42.147207    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:41:42.183986    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:41:42.184001    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:41:42.204383    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:41:42.204395    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:41:42.208887    8914 logs.go:123] Gathering logs for coredns [9df938e3e4be] ...
	I0812 03:41:42.208894    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df938e3e4be"
	I0812 03:41:42.219828    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:41:42.219842    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:41:42.231785    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:41:42.231794    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:41:42.256564    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:41:42.256575    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:41:42.268168    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:41:42.268186    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:41:42.280141    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:41:42.280152    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:41:42.294648    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:41:42.294658    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:41:42.312288    8914 logs.go:123] Gathering logs for coredns [8d562f33b5e4] ...
	I0812 03:41:42.312301    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d562f33b5e4"
	I0812 03:41:42.324261    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:41:42.324272    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:41:42.342238    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:41:42.342256    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:41:42.354044    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:41:42.354059    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:41:44.871044    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:41:49.872865    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:41:49.872955    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:41:49.885021    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:41:49.885089    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:41:49.896242    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:41:49.896306    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:41:49.907870    8914 logs.go:276] 4 containers: [ab95ef87686c 2872fddd2cc9 9df938e3e4be 8d562f33b5e4]
	I0812 03:41:49.907939    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:41:49.925427    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:41:49.925493    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:41:49.937480    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:41:49.937543    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:41:49.949892    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:41:49.949991    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:41:49.961926    8914 logs.go:276] 0 containers: []
	W0812 03:41:49.961938    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:41:49.961997    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:41:49.972935    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:41:49.972954    8914 logs.go:123] Gathering logs for coredns [ab95ef87686c] ...
	I0812 03:41:49.972959    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab95ef87686c"
	I0812 03:41:49.986073    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:41:49.986088    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:41:50.001764    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:41:50.001781    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:41:50.015299    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:41:50.015315    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:41:50.020354    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:41:50.020365    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:41:50.033749    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:41:50.033762    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:41:50.052566    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:41:50.052581    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:41:50.092294    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:41:50.092313    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:41:50.107136    8914 logs.go:123] Gathering logs for coredns [2872fddd2cc9] ...
	I0812 03:41:50.107150    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2872fddd2cc9"
	I0812 03:41:50.119668    8914 logs.go:123] Gathering logs for coredns [9df938e3e4be] ...
	I0812 03:41:50.119680    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df938e3e4be"
	I0812 03:41:50.136923    8914 logs.go:123] Gathering logs for coredns [8d562f33b5e4] ...
	I0812 03:41:50.136936    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d562f33b5e4"
	I0812 03:41:50.150282    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:41:50.150296    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:41:50.162870    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:41:50.162882    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:41:50.178512    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:41:50.178524    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:41:50.205656    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:41:50.205671    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:41:52.748680    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:41:57.750973    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:41:57.754430    8914 out.go:177] 
	W0812 03:41:57.758419    8914 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0812 03:41:57.758428    8914 out.go:239] * 
	W0812 03:41:57.759094    8914 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:41:57.776376    8914 out.go:177] 

** /stderr **
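
The stderr above settles into one fixed cycle: roughly every 2.5 seconds minikube probes https://10.0.2.15:8443/healthz, each probe gives up after about 5 seconds with "context deadline exceeded", and between probes it re-enumerates the component containers and tails their logs. As a rough illustration of that poll-until-healthy pattern, here is a minimal Go sketch; the function name and structure are invented for this report and are not minikube's actual implementation:

// Hypothetical sketch of the poll loop visible in the stderr above;
// not minikube's code (its real loop lives around api_server.go).
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz probes url until it returns 200 OK or the overall budget
// is spent. Each probe is capped at probeTimeout, which is what produces
// the "context deadline exceeded (Client.Timeout exceeded...)" lines.
func waitForHealthz(url string, probeTimeout, budget time.Duration) error {
	client := &http.Client{
		Timeout: probeTimeout,
		// A bootstrapping apiserver serves a cert the prober may not
		// trust yet, so this sketch skips verification (assumption).
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(budget)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthy
			}
		}
		time.Sleep(2500 * time.Millisecond) // pause between probes, as in the log
	}
	return fmt.Errorf("apiserver healthz never reported healthy: %s", url)
}

func main() {
	err := waitForHealthz("https://10.0.2.15:8443/healthz", 5*time.Second, 6*time.Minute)
	if err != nil {
		fmt.Println(err)
	}
}

The 6m0s budget in the sketch matches the "wait 6m0s for node" figure in the GUEST_START error above.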
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-969000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-08-12 03:41:57.856507 -0700 PDT m=+1362.048543335
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-969000 -n running-upgrade-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-969000 -n running-upgrade-969000: exit status 2 (15.61508625s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
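
The post-mortem helpers drive the minikube binary as a subprocess, and the "(may be ok)" note reflects that a non-zero status code from `minikube status` can still describe a usable cluster, as here where the host prints Running despite exit status 2. A hypothetical Go sketch of running the same command and branching on its exit code, using only os/exec:

// Illustrative only; the real helpers_test.go wraps this with more plumbing.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "running-upgrade-969000")
	out, err := cmd.Output() // captures stdout even when the command exits non-zero
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("status ok: %s", out)
	case errors.As(err, &exitErr):
		// A non-zero code encodes cluster state rather than a hard
		// failure, hence "may be ok" in the test helper above.
		fmt.Printf("status exit code %d, output %q\n", exitErr.ExitCode(), out)
	default:
		fmt.Printf("could not run minikube: %v\n", err)
	}
}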
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-969000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-421000          | force-systemd-flag-421000 | jenkins | v1.33.1 | 12 Aug 24 03:32 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-569000              | force-systemd-env-569000  | jenkins | v1.33.1 | 12 Aug 24 03:32 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-569000           | force-systemd-env-569000  | jenkins | v1.33.1 | 12 Aug 24 03:32 PDT | 12 Aug 24 03:32 PDT |
	| start   | -p docker-flags-150000                | docker-flags-150000       | jenkins | v1.33.1 | 12 Aug 24 03:32 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-421000             | force-systemd-flag-421000 | jenkins | v1.33.1 | 12 Aug 24 03:32 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-421000          | force-systemd-flag-421000 | jenkins | v1.33.1 | 12 Aug 24 03:32 PDT | 12 Aug 24 03:32 PDT |
	| start   | -p cert-expiration-736000             | cert-expiration-736000    | jenkins | v1.33.1 | 12 Aug 24 03:32 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-150000 ssh               | docker-flags-150000       | jenkins | v1.33.1 | 12 Aug 24 03:32 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-150000 ssh               | docker-flags-150000       | jenkins | v1.33.1 | 12 Aug 24 03:32 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-150000                | docker-flags-150000       | jenkins | v1.33.1 | 12 Aug 24 03:32 PDT | 12 Aug 24 03:32 PDT |
	| start   | -p cert-options-348000                | cert-options-348000       | jenkins | v1.33.1 | 12 Aug 24 03:32 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-348000 ssh               | cert-options-348000       | jenkins | v1.33.1 | 12 Aug 24 03:32 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-348000 -- sudo        | cert-options-348000       | jenkins | v1.33.1 | 12 Aug 24 03:32 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-348000                | cert-options-348000       | jenkins | v1.33.1 | 12 Aug 24 03:32 PDT | 12 Aug 24 03:32 PDT |
	| start   | -p running-upgrade-969000             | minikube                  | jenkins | v1.26.0 | 12 Aug 24 03:32 PDT | 12 Aug 24 03:33 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-969000             | running-upgrade-969000    | jenkins | v1.33.1 | 12 Aug 24 03:33 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-736000             | cert-expiration-736000    | jenkins | v1.33.1 | 12 Aug 24 03:35 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-736000             | cert-expiration-736000    | jenkins | v1.33.1 | 12 Aug 24 03:35 PDT | 12 Aug 24 03:35 PDT |
	| start   | -p kubernetes-upgrade-917000          | kubernetes-upgrade-917000 | jenkins | v1.33.1 | 12 Aug 24 03:35 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-917000          | kubernetes-upgrade-917000 | jenkins | v1.33.1 | 12 Aug 24 03:35 PDT | 12 Aug 24 03:35 PDT |
	| start   | -p kubernetes-upgrade-917000          | kubernetes-upgrade-917000 | jenkins | v1.33.1 | 12 Aug 24 03:35 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0     |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-917000          | kubernetes-upgrade-917000 | jenkins | v1.33.1 | 12 Aug 24 03:36 PDT | 12 Aug 24 03:36 PDT |
	| start   | -p stopped-upgrade-743000             | minikube                  | jenkins | v1.26.0 | 12 Aug 24 03:36 PDT | 12 Aug 24 03:36 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-743000 stop           | minikube                  | jenkins | v1.26.0 | 12 Aug 24 03:36 PDT | 12 Aug 24 03:36 PDT |
	| start   | -p stopped-upgrade-743000             | stopped-upgrade-743000    | jenkins | v1.33.1 | 12 Aug 24 03:36 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 03:36:56
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 03:36:56.080084    9066 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:36:56.080322    9066 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:36:56.080326    9066 out.go:304] Setting ErrFile to fd 2...
	I0812 03:36:56.080329    9066 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:36:56.080476    9066 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:36:56.081892    9066 out.go:298] Setting JSON to false
	I0812 03:36:56.101704    9066 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5786,"bootTime":1723453230,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:36:56.101787    9066 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:36:56.106855    9066 out.go:177] * [stopped-upgrade-743000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:36:56.114874    9066 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:36:56.114940    9066 notify.go:220] Checking for updates...
	I0812 03:36:56.120820    9066 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:36:56.123863    9066 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:36:56.126878    9066 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:36:56.129825    9066 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:36:56.132865    9066 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:36:56.136216    9066 config.go:182] Loaded profile config "stopped-upgrade-743000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0812 03:36:56.139762    9066 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0812 03:36:56.142867    9066 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:36:56.146793    9066 out.go:177] * Using the qemu2 driver based on existing profile
	I0812 03:36:56.153834    9066 start.go:297] selected driver: qemu2
	I0812 03:36:56.153841    9066 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-743000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51463 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-743000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0812 03:36:56.153907    9066 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:36:56.156753    9066 cni.go:84] Creating CNI manager for ""
	I0812 03:36:56.156771    9066 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0812 03:36:56.156808    9066 start.go:340] cluster config:
	{Name:stopped-upgrade-743000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51463 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-743000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0812 03:36:56.156859    9066 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:36:56.163819    9066 out.go:177] * Starting "stopped-upgrade-743000" primary control-plane node in "stopped-upgrade-743000" cluster
	I0812 03:36:56.167774    9066 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0812 03:36:56.167806    9066 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0812 03:36:56.167814    9066 cache.go:56] Caching tarball of preloaded images
	I0812 03:36:56.167885    9066 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0812 03:36:56.167891    9066 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0812 03:36:56.167942    9066 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/config.json ...
	I0812 03:36:56.168346    9066 start.go:360] acquireMachinesLock for stopped-upgrade-743000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:36:56.168395    9066 start.go:364] duration metric: took 41.584µs to acquireMachinesLock for "stopped-upgrade-743000"
	I0812 03:36:56.168412    9066 start.go:96] Skipping create...Using existing machine configuration
	I0812 03:36:56.168419    9066 fix.go:54] fixHost starting: 
	I0812 03:36:56.168545    9066 fix.go:112] recreateIfNeeded on stopped-upgrade-743000: state=Stopped err=<nil>
	W0812 03:36:56.168554    9066 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 03:36:56.176871    9066 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-743000" ...
	I0812 03:36:56.062396    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:36:56.180882    9066 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:36:56.180965    9066 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/stopped-upgrade-743000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/stopped-upgrade-743000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/stopped-upgrade-743000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51428-:22,hostfwd=tcp::51429-:2376,hostname=stopped-upgrade-743000 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/stopped-upgrade-743000/disk.qcow2
	I0812 03:36:56.230302    9066 main.go:141] libmachine: STDOUT: 
	I0812 03:36:56.230332    9066 main.go:141] libmachine: STDERR: 
	I0812 03:36:56.230338    9066 main.go:141] libmachine: Waiting for VM to start (ssh -p 51428 docker@127.0.0.1)...
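
	The restart above boils down to a plain qemu-system-aarch64 invocation: the virt machine type with hvf acceleration for Apple Silicon, the profile's memory/CPU settings, and a user-mode NIC whose hostfwd rules expose the guest's SSH (22) and Docker (2376) ports on host ports 51428/51429 so libmachine can reach them. A trimmed sketch with illustrative file names (the full command, including the UEFI pflash drive, is logged above):

	    # minimal qemu2-style launch; boot2docker.iso and disk.qcow2 stand in for the machine files
	    qemu-system-aarch64 \
	      -M virt,highmem=off -cpu host -accel hvf \
	      -m 2200 -smp 2 \
	      -boot d -cdrom boot2docker.iso \
	      -nic user,model=virtio,hostfwd=tcp::51428-:22,hostfwd=tcp::51429-:2376 \
	      -daemonize disk.qcow2
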
	I0812 03:37:01.064607    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:37:01.064855    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:37:01.099892    8914 logs.go:276] 2 containers: [014c98333383 a84eef3c085a]
	I0812 03:37:01.099992    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:37:01.122462    8914 logs.go:276] 2 containers: [19daa5a836e8 dbf0e437f9ea]
	I0812 03:37:01.122528    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:37:01.137578    8914 logs.go:276] 1 containers: [42fb9cb7a732]
	I0812 03:37:01.137648    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:37:01.148797    8914 logs.go:276] 2 containers: [12414f6e5bb9 497339253ef3]
	I0812 03:37:01.148861    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:37:01.159009    8914 logs.go:276] 1 containers: [b3cf63e263fe]
	I0812 03:37:01.159073    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:37:01.170377    8914 logs.go:276] 2 containers: [aaa8bcdd506c 533ac025a3aa]
	I0812 03:37:01.170446    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:37:01.180646    8914 logs.go:276] 0 containers: []
	W0812 03:37:01.180655    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:37:01.180724    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:37:01.190573    8914 logs.go:276] 0 containers: []
	W0812 03:37:01.190584    8914 logs.go:278] No container was found matching "storage-provisioner"
	I0812 03:37:01.190592    8914 logs.go:123] Gathering logs for etcd [19daa5a836e8] ...
	I0812 03:37:01.190599    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19daa5a836e8"
	I0812 03:37:01.204290    8914 logs.go:123] Gathering logs for kube-scheduler [497339253ef3] ...
	I0812 03:37:01.204301    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497339253ef3"
	I0812 03:37:01.218369    8914 logs.go:123] Gathering logs for kube-controller-manager [533ac025a3aa] ...
	I0812 03:37:01.218383    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533ac025a3aa"
	I0812 03:37:01.231408    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:37:01.231420    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:37:01.235987    8914 logs.go:123] Gathering logs for kube-apiserver [a84eef3c085a] ...
	I0812 03:37:01.235996    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84eef3c085a"
	I0812 03:37:01.260992    8914 logs.go:123] Gathering logs for coredns [42fb9cb7a732] ...
	I0812 03:37:01.261003    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42fb9cb7a732"
	I0812 03:37:01.273334    8914 logs.go:123] Gathering logs for kube-proxy [b3cf63e263fe] ...
	I0812 03:37:01.273345    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3cf63e263fe"
	I0812 03:37:01.284659    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:37:01.284671    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:37:01.320693    8914 logs.go:123] Gathering logs for kube-apiserver [014c98333383] ...
	I0812 03:37:01.320703    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c98333383"
	I0812 03:37:01.334752    8914 logs.go:123] Gathering logs for etcd [dbf0e437f9ea] ...
	I0812 03:37:01.334763    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf0e437f9ea"
	I0812 03:37:01.348988    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:37:01.348998    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:37:01.360429    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:37:01.360440    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:37:01.394369    8914 logs.go:123] Gathering logs for kube-scheduler [12414f6e5bb9] ...
	I0812 03:37:01.394380    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12414f6e5bb9"
	I0812 03:37:01.405908    8914 logs.go:123] Gathering logs for kube-controller-manager [aaa8bcdd506c] ...
	I0812 03:37:01.405920    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa8bcdd506c"
	I0812 03:37:01.430907    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:37:01.430918    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:37:03.957775    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:37:08.960084    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:37:08.960554    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:37:09.003022    8914 logs.go:276] 2 containers: [014c98333383 a84eef3c085a]
	I0812 03:37:09.003181    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:37:09.023718    8914 logs.go:276] 2 containers: [19daa5a836e8 dbf0e437f9ea]
	I0812 03:37:09.023837    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:37:09.039384    8914 logs.go:276] 1 containers: [42fb9cb7a732]
	I0812 03:37:09.039451    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:37:09.051800    8914 logs.go:276] 2 containers: [12414f6e5bb9 497339253ef3]
	I0812 03:37:09.051875    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:37:09.062533    8914 logs.go:276] 1 containers: [b3cf63e263fe]
	I0812 03:37:09.062602    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:37:09.073360    8914 logs.go:276] 2 containers: [aaa8bcdd506c 533ac025a3aa]
	I0812 03:37:09.073431    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:37:09.083751    8914 logs.go:276] 0 containers: []
	W0812 03:37:09.083765    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:37:09.083825    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:37:09.096154    8914 logs.go:276] 0 containers: []
	W0812 03:37:09.096166    8914 logs.go:278] No container was found matching "storage-provisioner"
	I0812 03:37:09.096173    8914 logs.go:123] Gathering logs for kube-scheduler [12414f6e5bb9] ...
	I0812 03:37:09.096179    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12414f6e5bb9"
	I0812 03:37:09.108251    8914 logs.go:123] Gathering logs for kube-scheduler [497339253ef3] ...
	I0812 03:37:09.108266    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497339253ef3"
	I0812 03:37:09.122439    8914 logs.go:123] Gathering logs for kube-controller-manager [533ac025a3aa] ...
	I0812 03:37:09.122452    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533ac025a3aa"
	I0812 03:37:09.134289    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:37:09.134299    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:37:09.145962    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:37:09.145974    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:37:09.150733    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:37:09.150741    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:37:09.184314    8914 logs.go:123] Gathering logs for kube-proxy [b3cf63e263fe] ...
	I0812 03:37:09.184326    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3cf63e263fe"
	I0812 03:37:09.197307    8914 logs.go:123] Gathering logs for etcd [dbf0e437f9ea] ...
	I0812 03:37:09.197324    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf0e437f9ea"
	I0812 03:37:09.211888    8914 logs.go:123] Gathering logs for coredns [42fb9cb7a732] ...
	I0812 03:37:09.211901    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42fb9cb7a732"
	I0812 03:37:09.223139    8914 logs.go:123] Gathering logs for kube-controller-manager [aaa8bcdd506c] ...
	I0812 03:37:09.223154    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa8bcdd506c"
	I0812 03:37:09.241071    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:37:09.241087    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:37:09.279103    8914 logs.go:123] Gathering logs for kube-apiserver [014c98333383] ...
	I0812 03:37:09.279110    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c98333383"
	I0812 03:37:09.302912    8914 logs.go:123] Gathering logs for etcd [19daa5a836e8] ...
	I0812 03:37:09.302922    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19daa5a836e8"
	I0812 03:37:09.323776    8914 logs.go:123] Gathering logs for kube-apiserver [a84eef3c085a] ...
	I0812 03:37:09.323786    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84eef3c085a"
	I0812 03:37:09.348653    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:37:09.348665    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:37:11.875006    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:37:16.877213    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:37:16.877354    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:37:16.888574    8914 logs.go:276] 2 containers: [014c98333383 a84eef3c085a]
	I0812 03:37:16.888646    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:37:16.904022    8914 logs.go:276] 2 containers: [19daa5a836e8 dbf0e437f9ea]
	I0812 03:37:16.904089    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:37:16.915664    8914 logs.go:276] 1 containers: [42fb9cb7a732]
	I0812 03:37:16.915729    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:37:16.926403    8914 logs.go:276] 2 containers: [12414f6e5bb9 497339253ef3]
	I0812 03:37:16.926470    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:37:16.944759    8914 logs.go:276] 1 containers: [b3cf63e263fe]
	I0812 03:37:16.944825    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:37:16.955779    8914 logs.go:276] 2 containers: [aaa8bcdd506c 533ac025a3aa]
	I0812 03:37:16.955846    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:37:16.965993    8914 logs.go:276] 0 containers: []
	W0812 03:37:16.966004    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:37:16.966060    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:37:16.977338    8914 logs.go:276] 0 containers: []
	W0812 03:37:16.977352    8914 logs.go:278] No container was found matching "storage-provisioner"
	I0812 03:37:16.977361    8914 logs.go:123] Gathering logs for kube-proxy [b3cf63e263fe] ...
	I0812 03:37:16.977368    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3cf63e263fe"
	I0812 03:37:16.993897    8914 logs.go:123] Gathering logs for etcd [19daa5a836e8] ...
	I0812 03:37:16.993910    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19daa5a836e8"
	I0812 03:37:17.008409    8914 logs.go:123] Gathering logs for kube-controller-manager [aaa8bcdd506c] ...
	I0812 03:37:17.008424    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa8bcdd506c"
	I0812 03:37:17.027405    8914 logs.go:123] Gathering logs for kube-apiserver [a84eef3c085a] ...
	I0812 03:37:17.027418    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84eef3c085a"
	I0812 03:37:17.054197    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:37:17.054218    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:37:17.093007    8914 logs.go:123] Gathering logs for kube-apiserver [014c98333383] ...
	I0812 03:37:17.093021    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c98333383"
	I0812 03:37:17.107935    8914 logs.go:123] Gathering logs for etcd [dbf0e437f9ea] ...
	I0812 03:37:17.107948    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf0e437f9ea"
	I0812 03:37:17.127757    8914 logs.go:123] Gathering logs for kube-scheduler [12414f6e5bb9] ...
	I0812 03:37:17.127775    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12414f6e5bb9"
	I0812 03:37:17.141493    8914 logs.go:123] Gathering logs for kube-scheduler [497339253ef3] ...
	I0812 03:37:17.141505    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497339253ef3"
	I0812 03:37:17.160146    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:37:17.160157    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:37:17.173032    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:37:17.173045    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:37:17.213441    8914 logs.go:123] Gathering logs for coredns [42fb9cb7a732] ...
	I0812 03:37:17.213462    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42fb9cb7a732"
	I0812 03:37:17.227520    8914 logs.go:123] Gathering logs for kube-controller-manager [533ac025a3aa] ...
	I0812 03:37:17.227536    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533ac025a3aa"
	I0812 03:37:17.241493    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:37:17.241505    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:37:17.266109    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:37:17.266120    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
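
	Each pass of this loop (pid 8914) has the same shape: probe /healthz with a roughly five-second budget, and on timeout re-enumerate the control-plane containers and tail their logs before trying again. To reproduce the probe by hand, note that 10.0.2.15:8443 is the guest-side NAT address, so the check has to run inside the VM; <profile> below is a placeholder for whichever profile pid 8914 is driving:

	    # run the same health probe from inside the guest
	    minikube ssh -p <profile> -- curl -ks --max-time 5 https://10.0.2.15:8443/healthz
	    # and confirm whether an apiserver container is even up
	    minikube ssh -p <profile> -- docker ps -a --filter name=k8s_kube-apiserver
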
	I0812 03:37:16.252232    9066 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/config.json ...
	I0812 03:37:16.253109    9066 machine.go:94] provisionDockerMachine start ...
	I0812 03:37:16.253295    9066 main.go:141] libmachine: Using SSH client type: native
	I0812 03:37:16.253816    9066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024dea10] 0x1024e1270 <nil>  [] 0s} localhost 51428 <nil> <nil>}
	I0812 03:37:16.253830    9066 main.go:141] libmachine: About to run SSH command:
	hostname
	I0812 03:37:16.352855    9066 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0812 03:37:16.352889    9066 buildroot.go:166] provisioning hostname "stopped-upgrade-743000"
	I0812 03:37:16.353025    9066 main.go:141] libmachine: Using SSH client type: native
	I0812 03:37:16.353286    9066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024dea10] 0x1024e1270 <nil>  [] 0s} localhost 51428 <nil> <nil>}
	I0812 03:37:16.353301    9066 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-743000 && echo "stopped-upgrade-743000" | sudo tee /etc/hostname
	I0812 03:37:16.445903    9066 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-743000
	
	I0812 03:37:16.446017    9066 main.go:141] libmachine: Using SSH client type: native
	I0812 03:37:16.446232    9066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024dea10] 0x1024e1270 <nil>  [] 0s} localhost 51428 <nil> <nil>}
	I0812 03:37:16.446245    9066 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-743000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-743000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-743000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 03:37:16.527569    9066 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 03:37:16.527586    9066 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19409-6342/.minikube CaCertPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19409-6342/.minikube}
	I0812 03:37:16.527603    9066 buildroot.go:174] setting up certificates
	I0812 03:37:16.527612    9066 provision.go:84] configureAuth start
	I0812 03:37:16.527619    9066 provision.go:143] copyHostCerts
	I0812 03:37:16.527707    9066 exec_runner.go:144] found /Users/jenkins/minikube-integration/19409-6342/.minikube/ca.pem, removing ...
	I0812 03:37:16.527714    9066 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19409-6342/.minikube/ca.pem
	I0812 03:37:16.527885    9066 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19409-6342/.minikube/ca.pem (1082 bytes)
	I0812 03:37:16.528119    9066 exec_runner.go:144] found /Users/jenkins/minikube-integration/19409-6342/.minikube/cert.pem, removing ...
	I0812 03:37:16.528125    9066 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19409-6342/.minikube/cert.pem
	I0812 03:37:16.528184    9066 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19409-6342/.minikube/cert.pem (1123 bytes)
	I0812 03:37:16.528307    9066 exec_runner.go:144] found /Users/jenkins/minikube-integration/19409-6342/.minikube/key.pem, removing ...
	I0812 03:37:16.528312    9066 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19409-6342/.minikube/key.pem
	I0812 03:37:16.528373    9066 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19409-6342/.minikube/key.pem (1675 bytes)
	I0812 03:37:16.528471    9066 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-743000 san=[127.0.0.1 localhost minikube stopped-upgrade-743000]
	I0812 03:37:16.567156    9066 provision.go:177] copyRemoteCerts
	I0812 03:37:16.567185    9066 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 03:37:16.567192    9066 sshutil.go:53] new ssh client: &{IP:localhost Port:51428 SSHKeyPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/stopped-upgrade-743000/id_rsa Username:docker}
	I0812 03:37:16.607603    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0812 03:37:16.614551    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0812 03:37:16.621808    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0812 03:37:16.629188    9066 provision.go:87] duration metric: took 101.570083ms to configureAuth
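
	configureAuth regenerates a server certificate signed by the local minikube CA, with the SANs listed above baked in so the Docker endpoint verifies under any of its names. A rough openssl equivalent of that step, purely illustrative (minikube does this in Go, not by shelling out):

	    # issue a server cert signed by ca.pem/ca-key.pem with the same SANs as the log line above
	    openssl req -new -newkey rsa:2048 -nodes \
	      -keyout server-key.pem -subj "/O=jenkins.stopped-upgrade-743000" -out server.csr
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 1095 \
	      -extfile <(printf 'subjectAltName=IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:stopped-upgrade-743000') \
	      -out server.pem
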
	I0812 03:37:16.629197    9066 buildroot.go:189] setting minikube options for container-runtime
	I0812 03:37:16.629315    9066 config.go:182] Loaded profile config "stopped-upgrade-743000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0812 03:37:16.629353    9066 main.go:141] libmachine: Using SSH client type: native
	I0812 03:37:16.629442    9066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024dea10] 0x1024e1270 <nil>  [] 0s} localhost 51428 <nil> <nil>}
	I0812 03:37:16.629447    9066 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0812 03:37:16.700439    9066 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0812 03:37:16.700448    9066 buildroot.go:70] root file system type: tmpfs
	I0812 03:37:16.700499    9066 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0812 03:37:16.700551    9066 main.go:141] libmachine: Using SSH client type: native
	I0812 03:37:16.700673    9066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024dea10] 0x1024e1270 <nil>  [] 0s} localhost 51428 <nil> <nil>}
	I0812 03:37:16.700708    9066 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0812 03:37:16.775779    9066 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0812 03:37:16.775833    9066 main.go:141] libmachine: Using SSH client type: native
	I0812 03:37:16.775940    9066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024dea10] 0x1024e1270 <nil>  [] 0s} localhost 51428 <nil> <nil>}
	I0812 03:37:16.775948    9066 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0812 03:37:17.176367    9066 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0812 03:37:17.176380    9066 machine.go:97] duration metric: took 923.272333ms to provisionDockerMachine
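
	The "diff: can't stat" output above is the expected first-boot case: no docker.service existed yet, so the rendered unit is simply moved into place, enabled, and started; on later runs the diff short-circuits the restart when nothing changed. Two quick checks to confirm the override landed, assuming systemd-analyze is present in the guest image:

	    # show the unit systemd actually loaded (the ExecStart override should be visible)
	    minikube ssh -p stopped-upgrade-743000 -- systemctl cat docker.service
	    # lint the installed unit for obvious mistakes
	    minikube ssh -p stopped-upgrade-743000 -- sudo systemd-analyze verify /lib/systemd/system/docker.service
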
	I0812 03:37:17.176387    9066 start.go:293] postStartSetup for "stopped-upgrade-743000" (driver="qemu2")
	I0812 03:37:17.176394    9066 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 03:37:17.176452    9066 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 03:37:17.176464    9066 sshutil.go:53] new ssh client: &{IP:localhost Port:51428 SSHKeyPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/stopped-upgrade-743000/id_rsa Username:docker}
	I0812 03:37:17.217623    9066 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 03:37:17.219445    9066 info.go:137] Remote host: Buildroot 2021.02.12
	I0812 03:37:17.219459    9066 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19409-6342/.minikube/addons for local assets ...
	I0812 03:37:17.219570    9066 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19409-6342/.minikube/files for local assets ...
	I0812 03:37:17.219698    9066 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19409-6342/.minikube/files/etc/ssl/certs/68412.pem -> 68412.pem in /etc/ssl/certs
	I0812 03:37:17.219831    9066 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 03:37:17.224799    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/files/etc/ssl/certs/68412.pem --> /etc/ssl/certs/68412.pem (1708 bytes)
	I0812 03:37:17.233485    9066 start.go:296] duration metric: took 57.09025ms for postStartSetup
	I0812 03:37:17.233506    9066 fix.go:56] duration metric: took 21.065379875s for fixHost
	I0812 03:37:17.233578    9066 main.go:141] libmachine: Using SSH client type: native
	I0812 03:37:17.233702    9066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024dea10] 0x1024e1270 <nil>  [] 0s} localhost 51428 <nil> <nil>}
	I0812 03:37:17.233707    9066 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0812 03:37:17.311135    9066 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723459037.311841504
	
	I0812 03:37:17.311145    9066 fix.go:216] guest clock: 1723459037.311841504
	I0812 03:37:17.311150    9066 fix.go:229] Guest: 2024-08-12 03:37:17.311841504 -0700 PDT Remote: 2024-08-12 03:37:17.233509 -0700 PDT m=+21.185372084 (delta=78.332504ms)
	I0812 03:37:17.311165    9066 fix.go:200] guest clock delta is within tolerance: 78.332504ms
	I0812 03:37:17.311167    9066 start.go:83] releasing machines lock for "stopped-upgrade-743000", held for 21.143057916s
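
	The guest clock check samples date +%s.%N inside the VM and compares it with the host wall clock; a delta inside tolerance (here ~78ms) means no time resync is needed. A manual version of the same comparison, assuming GNU date in the guest (BSD date on the macOS host has no %N, hence python3):

	    # guest-side timestamp with nanoseconds
	    minikube ssh -p stopped-upgrade-743000 -- date +%s.%N
	    # host-side equivalent on macOS
	    python3 -c 'import time; print(f"{time.time():.9f}")'
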
	I0812 03:37:17.311241    9066 ssh_runner.go:195] Run: cat /version.json
	I0812 03:37:17.311252    9066 sshutil.go:53] new ssh client: &{IP:localhost Port:51428 SSHKeyPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/stopped-upgrade-743000/id_rsa Username:docker}
	I0812 03:37:17.311242    9066 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 03:37:17.311359    9066 sshutil.go:53] new ssh client: &{IP:localhost Port:51428 SSHKeyPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/stopped-upgrade-743000/id_rsa Username:docker}
	W0812 03:37:17.311796    9066 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51428: connect: connection refused
	I0812 03:37:17.311811    9066 retry.go:31] will retry after 141.237118ms: dial tcp [::1]:51428: connect: connection refused
	W0812 03:37:17.348242    9066 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0812 03:37:17.348292    9066 ssh_runner.go:195] Run: systemctl --version
	I0812 03:37:17.350068    9066 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 03:37:17.351800    9066 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 03:37:17.351829    9066 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0812 03:37:17.354755    9066 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0812 03:37:17.359257    9066 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0812 03:37:17.359268    9066 start.go:495] detecting cgroup driver to use...
	I0812 03:37:17.359349    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 03:37:17.366585    9066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0812 03:37:17.370157    9066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0812 03:37:17.373649    9066 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0812 03:37:17.373688    9066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0812 03:37:17.376916    9066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0812 03:37:17.379684    9066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0812 03:37:17.382726    9066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0812 03:37:17.386306    9066 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 03:37:17.389686    9066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0812 03:37:17.392704    9066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0812 03:37:17.395606    9066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0812 03:37:17.398818    9066 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 03:37:17.401877    9066 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 03:37:17.404404    9066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 03:37:17.481341    9066 ssh_runner.go:195] Run: sudo systemctl restart containerd
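
	The sed edits above pin containerd to the cgroupfs driver, the runc.v2 shim, pause:3.7 as the sandbox image, and /etc/cni/net.d for CNI configs. After them, the CRI section of /etc/containerd/config.toml should read roughly like this illustrative excerpt (not a dump from this run):

	    [plugins."io.containerd.grpc.v1.cri"]
	      enable_unprivileged_ports = true
	      sandbox_image = "registry.k8s.io/pause:3.7"
	      restrict_oom_score_adj = false
	      [plugins."io.containerd.grpc.v1.cri".cni]
	        conf_dir = "/etc/cni/net.d"
	      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	        runtime_type = "io.containerd.runc.v2"
	        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	          SystemdCgroup = false
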
	I0812 03:37:17.489206    9066 start.go:495] detecting cgroup driver to use...
	I0812 03:37:17.489276    9066 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0812 03:37:17.494441    9066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 03:37:17.500354    9066 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 03:37:17.545250    9066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 03:37:17.550059    9066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0812 03:37:17.554738    9066 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0812 03:37:17.618953    9066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0812 03:37:17.624663    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 03:37:17.630434    9066 ssh_runner.go:195] Run: which cri-dockerd
	I0812 03:37:17.631641    9066 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0812 03:37:17.634297    9066 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0812 03:37:17.639290    9066 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0812 03:37:17.717036    9066 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0812 03:37:17.789158    9066 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0812 03:37:17.789225    9066 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0812 03:37:17.794666    9066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 03:37:17.872511    9066 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0812 03:37:19.052053    9066 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.179526834s)
	I0812 03:37:19.052109    9066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0812 03:37:19.056815    9066 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0812 03:37:19.062998    9066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0812 03:37:19.067655    9066 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0812 03:37:19.146721    9066 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0812 03:37:19.229146    9066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 03:37:19.312707    9066 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0812 03:37:19.318596    9066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0812 03:37:19.323510    9066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 03:37:19.380827    9066 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0812 03:37:19.420408    9066 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0812 03:37:19.420482    9066 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0812 03:37:19.422440    9066 start.go:563] Will wait 60s for crictl version
	I0812 03:37:19.422487    9066 ssh_runner.go:195] Run: which crictl
	I0812 03:37:19.424154    9066 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 03:37:19.438401    9066 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
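
	With /etc/crictl.yaml now pointing at unix:///var/run/cri-dockerd.sock, crictl reaches Docker 20.10.16 through the cri-dockerd shim, which is what the version probe above confirms. Two quick sanity checks along the same lines:

	    # the endpoint is resolved from /etc/crictl.yaml, so no --runtime-endpoint flag is needed
	    minikube ssh -p stopped-upgrade-743000 -- sudo crictl info
	    minikube ssh -p stopped-upgrade-743000 -- sudo crictl ps -a
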
	I0812 03:37:19.438463    9066 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0812 03:37:19.454325    9066 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0812 03:37:19.478672    9066 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0812 03:37:19.478736    9066 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0812 03:37:19.480139    9066 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 03:37:19.483605    9066 kubeadm.go:883] updating cluster {Name:stopped-upgrade-743000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51463 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-743000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0812 03:37:19.483664    9066 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0812 03:37:19.483707    9066 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0812 03:37:19.494327    9066 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0812 03:37:19.494335    9066 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0812 03:37:19.494375    9066 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0812 03:37:19.497899    9066 ssh_runner.go:195] Run: which lz4
	I0812 03:37:19.499230    9066 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0812 03:37:19.500451    9066 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0812 03:37:19.500461    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0812 03:37:20.363040    9066 docker.go:649] duration metric: took 863.849875ms to copy over tarball
	I0812 03:37:20.363099    9066 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0812 03:37:19.770433    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:37:21.515821    9066 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.152715167s)
	I0812 03:37:21.515835    9066 ssh_runner.go:146] rm: /preloaded.tar.lz4
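
	The preload is a ~360 MB lz4-compressed tarball of a pre-populated /var/lib/docker in overlay2 layout: it is scp'd into the guest, untarred over /var with xattrs preserved, and deleted. To inspect the cached copy on the host, assuming GNU tar and the lz4 CLI, and the default ~/.minikube layout (this run keeps its MINIKUBE_HOME under minikube-integration):

	    cd ~/.minikube/cache/preloaded-tarball
	    # integrity-check the archive
	    lz4 -t preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	    # peek at the first few entries without extracting
	    tar -I lz4 -tf preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 | head
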
	I0812 03:37:21.531902    9066 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0812 03:37:21.535000    9066 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0812 03:37:21.540172    9066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 03:37:21.626396    9066 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0812 03:37:23.174634    9066 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.548239334s)
	I0812 03:37:23.174723    9066 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0812 03:37:23.190465    9066 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0812 03:37:23.190479    9066 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0812 03:37:23.190485    9066 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0812 03:37:23.194593    9066 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 03:37:23.196176    9066 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0812 03:37:23.198031    9066 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 03:37:23.198362    9066 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0812 03:37:23.199686    9066 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0812 03:37:23.199761    9066 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0812 03:37:23.201051    9066 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0812 03:37:23.202383    9066 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0812 03:37:23.202480    9066 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0812 03:37:23.202801    9066 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0812 03:37:23.203694    9066 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0812 03:37:23.203761    9066 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0812 03:37:23.205108    9066 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0812 03:37:23.205215    9066 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0812 03:37:23.205720    9066 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0812 03:37:23.206419    9066 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0812 03:37:23.645538    9066 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0812 03:37:23.646955    9066 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0812 03:37:23.656568    9066 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0812 03:37:23.656590    9066 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0812 03:37:23.656652    9066 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0812 03:37:23.657416    9066 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0812 03:37:23.658391    9066 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0812 03:37:23.658401    9066 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0812 03:37:23.658424    9066 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	W0812 03:37:23.662136    9066 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0812 03:37:23.662269    9066 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0812 03:37:23.675730    9066 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0812 03:37:23.676990    9066 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0812 03:37:23.683650    9066 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0812 03:37:23.683669    9066 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0812 03:37:23.683670    9066 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0812 03:37:23.683679    9066 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0812 03:37:23.683692    9066 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0812 03:37:23.683730    9066 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0812 03:37:23.683730    9066 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0812 03:37:23.697585    9066 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0812 03:37:23.699864    9066 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0812 03:37:23.699880    9066 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0812 03:37:23.699909    9066 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0812 03:37:23.707188    9066 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0812 03:37:23.709618    9066 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0812 03:37:23.709754    9066 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0812 03:37:23.709854    9066 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0812 03:37:23.720091    9066 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0812 03:37:23.720116    9066 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0812 03:37:23.720165    9066 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0812 03:37:23.728728    9066 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0812 03:37:23.728849    9066 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0812 03:37:23.729931    9066 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0812 03:37:23.729947    9066 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0812 03:37:23.729983    9066 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0812 03:37:23.730018    9066 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0812 03:37:23.730036    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0812 03:37:23.749413    9066 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0812 03:37:23.749457    9066 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0812 03:37:23.749469    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0812 03:37:23.749535    9066 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0812 03:37:23.761545    9066 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0812 03:37:23.776214    9066 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0812 03:37:23.776243    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0812 03:37:23.781323    9066 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0812 03:37:23.781338    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0812 03:37:23.802880    9066 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0812 03:37:23.802987    9066 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 03:37:23.851785    9066 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0812 03:37:23.851806    9066 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0812 03:37:23.851812    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0812 03:37:23.857120    9066 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0812 03:37:23.857145    9066 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 03:37:23.857206    9066 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 03:37:23.950035    9066 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0812 03:37:23.950055    9066 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0812 03:37:23.950179    9066 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0812 03:37:23.956227    9066 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0812 03:37:23.956258    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0812 03:37:24.024177    9066 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0812 03:37:24.024193    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0812 03:37:24.387838    9066 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0812 03:37:24.387865    9066 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0812 03:37:24.387873    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0812 03:37:24.539036    9066 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0812 03:37:24.539077    9066 cache_images.go:92] duration metric: took 1.348598042s to LoadCachedImages
	W0812 03:37:24.539117    9066 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
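
The cache-load loop above follows one fixed pattern per image: stat the tarball on the node, scp it over only if the stat fails, then pipe it into the daemon with "sudo cat ... | docker load". A minimal Go sketch of that pattern, using local exec and illustrative paths in place of minikube's ssh_runner:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadCachedImage mirrors the stat -> scp -> docker load sequence in the
// log. Local exec and these paths are illustrative assumptions, not
// minikube's actual implementation.
func loadCachedImage(cachePath, nodePath string) error {
	// Existence check, equivalent to `stat -c "%s %y" <nodePath>`.
	if _, err := os.Stat(nodePath); err == nil {
		return nil // already transferred and loadable
	}
	// Transfer step: the log uses scp over ssh_runner; plain copy here.
	if err := exec.Command("cp", cachePath, nodePath).Run(); err != nil {
		return fmt.Errorf("transfer %s: %w", cachePath, err)
	}
	// Load step: `cat <tar> | docker load`, as run on the node.
	cmd := exec.Command("/bin/bash", "-c",
		fmt.Sprintf("cat %s | docker load", nodePath))
	return cmd.Run()
}

func main() {
	if err := loadCachedImage("/tmp/cache/pause_3.7",
		"/var/lib/minikube/images/pause_3.7"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
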
	I0812 03:37:24.539129    9066 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0812 03:37:24.539176    9066 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-743000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-743000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0812 03:37:24.539249    9066 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0812 03:37:24.553153    9066 cni.go:84] Creating CNI manager for ""
	I0812 03:37:24.553169    9066 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0812 03:37:24.553174    9066 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0812 03:37:24.553182    9066 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-743000 NodeName:stopped-upgrade-743000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0812 03:37:24.553245    9066 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-743000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0812 03:37:24.553303    9066 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0812 03:37:24.556111    9066 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 03:37:24.556141    9066 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0812 03:37:24.558870    9066 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0812 03:37:24.564038    9066 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 03:37:24.568715    9066 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
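
The three "scp memory -->" writes above ship rendered bytes straight from memory; nothing is staged on the host's disk first. A sketch of rendering such a config in-process with text/template (the template fields here are illustrative, not minikube's actual template data):

package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// A fragment of the InitConfiguration seen in the log, parameterized.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "{{.NodeName}}"
`

func main() {
	tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, map[string]any{
		"NodeIP": "10.0.2.15", "Port": 8443, "NodeName": "stopped-upgrade-743000",
	}); err != nil {
		panic(err)
	}
	fmt.Print(buf.String()) // these in-memory bytes are what "scp memory" transfers
}
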
	I0812 03:37:24.573964    9066 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0812 03:37:24.575435    9066 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
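
The one-liner above makes the /etc/hosts edit idempotent: grep -v strips any stale control-plane.minikube.internal line, the fresh entry is appended, and the result is written back through a temp file. The same logic in Go (paths assumed, and the log's "sudo cp" replaced by a rename):

package main

import (
	"bytes"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "\t<host>" and
// appends "<ip>\t<host>", matching the grep -v / echo / cp sequence.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var out bytes.Buffer
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+host) {
			continue // drop blanks and the stale entry (grep -v equivalent)
		}
		out.WriteString(line + "\n")
	}
	out.WriteString(ip + "\t" + host + "\n")
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, out.Bytes(), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	_ = ensureHostsEntry("/tmp/hosts", "10.0.2.15", "control-plane.minikube.internal")
}
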
	I0812 03:37:24.579149    9066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 03:37:24.664524    9066 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 03:37:24.669580    9066 certs.go:68] Setting up /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000 for IP: 10.0.2.15
	I0812 03:37:24.669590    9066 certs.go:194] generating shared ca certs ...
	I0812 03:37:24.669599    9066 certs.go:226] acquiring lock for ca certs: {Name:mk040c6fb5b98a0bc56f55d23979ed8d77242cff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:37:24.669774    9066 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19409-6342/.minikube/ca.key
	I0812 03:37:24.669826    9066 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19409-6342/.minikube/proxy-client-ca.key
	I0812 03:37:24.669831    9066 certs.go:256] generating profile certs ...
	I0812 03:37:24.669920    9066 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/client.key
	I0812 03:37:24.669937    9066 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/apiserver.key.5b68802c
	I0812 03:37:24.669949    9066 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/apiserver.crt.5b68802c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0812 03:37:24.744477    9066 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/apiserver.crt.5b68802c ...
	I0812 03:37:24.744489    9066 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/apiserver.crt.5b68802c: {Name:mk9f5c2514d0b4bb1c574718ce8d3c9d47233e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:37:24.744918    9066 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/apiserver.key.5b68802c ...
	I0812 03:37:24.744925    9066 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/apiserver.key.5b68802c: {Name:mk3f7bf68d0cf30662080a4152ee1bdf57f4967f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:37:24.745089    9066 certs.go:381] copying /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/apiserver.crt.5b68802c -> /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/apiserver.crt
	I0812 03:37:24.745230    9066 certs.go:385] copying /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/apiserver.key.5b68802c -> /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/apiserver.key
	I0812 03:37:24.745377    9066 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/proxy-client.key
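
The apiserver serving cert generated above carries the service VIP, loopback, and node addresses as IP SANs ([10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]). A self-signed crypto/x509 sketch of issuing a cert with those SANs (the real one is signed by minikubeCA; all names here are illustrative):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		// The SANs listed in the log line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed for brevity: template doubles as parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	fmt.Println(len(der), err)
}
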
	I0812 03:37:24.745512    9066 certs.go:484] found cert: /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/6841.pem (1338 bytes)
	W0812 03:37:24.745540    9066 certs.go:480] ignoring /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/6841_empty.pem, impossibly tiny 0 bytes
	I0812 03:37:24.745546    9066 certs.go:484] found cert: /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca-key.pem (1679 bytes)
	I0812 03:37:24.745573    9066 certs.go:484] found cert: /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem (1082 bytes)
	I0812 03:37:24.745598    9066 certs.go:484] found cert: /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem (1123 bytes)
	I0812 03:37:24.745623    9066 certs.go:484] found cert: /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/key.pem (1675 bytes)
	I0812 03:37:24.745676    9066 certs.go:484] found cert: /Users/jenkins/minikube-integration/19409-6342/.minikube/files/etc/ssl/certs/68412.pem (1708 bytes)
	I0812 03:37:24.745999    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 03:37:24.753240    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 03:37:24.760288    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 03:37:24.767253    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 03:37:24.774603    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0812 03:37:24.782490    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0812 03:37:24.790639    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 03:37:24.798541    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0812 03:37:24.806325    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/6841.pem --> /usr/share/ca-certificates/6841.pem (1338 bytes)
	I0812 03:37:24.814047    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/files/etc/ssl/certs/68412.pem --> /usr/share/ca-certificates/68412.pem (1708 bytes)
	I0812 03:37:24.822082    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 03:37:24.829877    9066 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 03:37:24.835841    9066 ssh_runner.go:195] Run: openssl version
	I0812 03:37:24.838127    9066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/68412.pem && ln -fs /usr/share/ca-certificates/68412.pem /etc/ssl/certs/68412.pem"
	I0812 03:37:24.841897    9066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/68412.pem
	I0812 03:37:24.843543    9066 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 10:20 /usr/share/ca-certificates/68412.pem
	I0812 03:37:24.843578    9066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/68412.pem
	I0812 03:37:24.845673    9066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/68412.pem /etc/ssl/certs/3ec20f2e.0"
	I0812 03:37:24.849326    9066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 03:37:24.852514    9066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 03:37:24.854274    9066 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 10:33 /usr/share/ca-certificates/minikubeCA.pem
	I0812 03:37:24.854317    9066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 03:37:24.856522    9066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 03:37:24.859848    9066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6841.pem && ln -fs /usr/share/ca-certificates/6841.pem /etc/ssl/certs/6841.pem"
	I0812 03:37:24.863159    9066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6841.pem
	I0812 03:37:24.864777    9066 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 10:20 /usr/share/ca-certificates/6841.pem
	I0812 03:37:24.864808    9066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6841.pem
	I0812 03:37:24.866844    9066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6841.pem /etc/ssl/certs/51391683.0"
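
Each ln -fs above installs a CA under OpenSSL's hashed-directory convention: the link name is the subject hash printed by "openssl x509 -hash -noout" plus a ".0" suffix, which is how TLS clients locate the PEM in /etc/ssl/certs. A small Go equivalent (paths illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash creates <certsDir>/<subject-hash>.0 -> pem,
// mirroring the openssl x509 -hash / ln -fs pair in the log.
func linkBySubjectHash(pem, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("%s/%s.0", certsDir, hash)
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(pem, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem",
		"/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
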
	I0812 03:37:24.870671    9066 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 03:37:24.872357    9066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0812 03:37:24.874701    9066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0812 03:37:24.877041    9066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0812 03:37:24.879377    9066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0812 03:37:24.881763    9066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0812 03:37:24.883999    9066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
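
The six "-checkend 86400" runs above each ask one question: will this certificate expire within the next 24 hours (openssl exits 1) or not (exits 0)? The same check via crypto/x509, with an assumed path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the cert at path expires within d,
// the in-process equivalent of `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
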
	I0812 03:37:24.885985    9066 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-743000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51463 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-743000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0812 03:37:24.886065    9066 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0812 03:37:24.900919    9066 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0812 03:37:24.904221    9066 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0812 03:37:24.904228    9066 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0812 03:37:24.904265    9066 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0812 03:37:24.907314    9066 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0812 03:37:24.907640    9066 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-743000" does not appear in /Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:37:24.907739    9066 kubeconfig.go:62] /Users/jenkins/minikube-integration/19409-6342/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-743000" cluster setting kubeconfig missing "stopped-upgrade-743000" context setting]
	I0812 03:37:24.907960    9066 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/kubeconfig: {Name:mkb70885d9201a61b449567803d8de7b739c5101 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:37:24.908425    9066 kapi.go:59] client config for stopped-upgrade-743000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/client.key", CAFile:"/Users/jenkins/minikube-integration/19409-6342/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1038744e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0812 03:37:24.908758    9066 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0812 03:37:24.911956    9066 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-743000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
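
Drift detection here is nothing more than "diff -u old new" with the exit status as the verdict: 0 means no drift, 1 means drift (reconfigure from the .new file), anything higher is an error. A sketch of that decision:

package main

import (
	"fmt"
	"os/exec"
)

// kubeadmConfigDrifted runs diff -u and maps its exit code onto the
// reconfigure decision seen in the log.
func kubeadmConfigDrifted(current, next string) (bool, error) {
	out, err := exec.Command("diff", "-u", current, next).CombinedOutput()
	if err == nil {
		return false, nil // identical: keep the running config
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		fmt.Printf("detected kubeadm config drift:\n%s", out) // the diff printed above
		return true, nil
	}
	return false, err // exit code >1: diff itself failed
}

func main() {
	drifted, err := kubeadmConfigDrifted(
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(drifted, err)
}
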
	I0812 03:37:24.911964    9066 kubeadm.go:1160] stopping kube-system containers ...
	I0812 03:37:24.912031    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0812 03:37:24.923439    9066 docker.go:483] Stopping containers: [93391c2226c7 a41e64288824 9306bfbeb4d2 56d45e7374fb 18fa8e4baf80 126b1845793f 07ab03f2f278 2d03e258149f]
	I0812 03:37:24.923513    9066 ssh_runner.go:195] Run: docker stop 93391c2226c7 a41e64288824 9306bfbeb4d2 56d45e7374fb 18fa8e4baf80 126b1845793f 07ab03f2f278 2d03e258149f
	I0812 03:37:24.935740    9066 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0812 03:37:24.941716    9066 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 03:37:24.945017    9066 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 03:37:24.945026    9066 kubeadm.go:157] found existing configuration files:
	
	I0812 03:37:24.945056    9066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51463 /etc/kubernetes/admin.conf
	I0812 03:37:24.947622    9066 kubeadm.go:163] "https://control-plane.minikube.internal:51463" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51463 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 03:37:24.947666    9066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 03:37:24.950820    9066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51463 /etc/kubernetes/kubelet.conf
	I0812 03:37:24.954258    9066 kubeadm.go:163] "https://control-plane.minikube.internal:51463" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51463 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 03:37:24.954309    9066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 03:37:24.957615    9066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51463 /etc/kubernetes/controller-manager.conf
	I0812 03:37:24.960639    9066 kubeadm.go:163] "https://control-plane.minikube.internal:51463" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51463 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 03:37:24.960682    9066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 03:37:24.963494    9066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51463 /etc/kubernetes/scheduler.conf
	I0812 03:37:24.966293    9066 kubeadm.go:163] "https://control-plane.minikube.internal:51463" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51463 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 03:37:24.966319    9066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
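
The four grep/rm pairs above apply one rule per kubeconfig: keep the file only if it already references the expected control-plane endpoint, otherwise delete it so kubeadm regenerates it. (In this run grep exits 2 because the files do not exist at all.) Sketched in Go:

package main

import (
	"bytes"
	"os"
)

// pruneStaleKubeconfig removes path unless it already mentions the
// expected endpoint, mirroring the grep / rm -f pairs in the log.
func pruneStaleKubeconfig(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		if os.IsNotExist(err) {
			return nil // nothing to prune (the grep status-2 case above)
		}
		return err
	}
	if bytes.Contains(data, []byte(endpoint)) {
		return nil // already targets the right control plane
	}
	return os.Remove(path)
}

func main() {
	_ = pruneStaleKubeconfig("/etc/kubernetes/admin.conf",
		"https://control-plane.minikube.internal:51463")
}
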
	I0812 03:37:24.969892    9066 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 03:37:24.973198    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 03:37:24.996343    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 03:37:25.542433    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0812 03:37:25.673201    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 03:37:25.695564    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0812 03:37:25.722858    9066 api_server.go:52] waiting for apiserver process to appear ...
	I0812 03:37:25.722936    9066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 03:37:24.772379    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:37:24.772448    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:37:24.784408    8914 logs.go:276] 2 containers: [014c98333383 a84eef3c085a]
	I0812 03:37:24.784467    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:37:24.804288    8914 logs.go:276] 2 containers: [19daa5a836e8 dbf0e437f9ea]
	I0812 03:37:24.804351    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:37:24.815955    8914 logs.go:276] 1 containers: [42fb9cb7a732]
	I0812 03:37:24.816015    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:37:24.831902    8914 logs.go:276] 2 containers: [12414f6e5bb9 497339253ef3]
	I0812 03:37:24.831972    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:37:24.843795    8914 logs.go:276] 1 containers: [b3cf63e263fe]
	I0812 03:37:24.843848    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:37:24.855254    8914 logs.go:276] 2 containers: [aaa8bcdd506c 533ac025a3aa]
	I0812 03:37:24.855306    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:37:24.866296    8914 logs.go:276] 0 containers: []
	W0812 03:37:24.866307    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:37:24.866357    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:37:24.878773    8914 logs.go:276] 0 containers: []
	W0812 03:37:24.878784    8914 logs.go:278] No container was found matching "storage-provisioner"
	I0812 03:37:24.878792    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:37:24.878799    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:37:24.919260    8914 logs.go:123] Gathering logs for kube-apiserver [a84eef3c085a] ...
	I0812 03:37:24.919274    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84eef3c085a"
	I0812 03:37:24.945909    8914 logs.go:123] Gathering logs for kube-scheduler [12414f6e5bb9] ...
	I0812 03:37:24.945918    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12414f6e5bb9"
	I0812 03:37:24.968409    8914 logs.go:123] Gathering logs for kube-scheduler [497339253ef3] ...
	I0812 03:37:24.968419    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497339253ef3"
	I0812 03:37:24.984454    8914 logs.go:123] Gathering logs for kube-controller-manager [aaa8bcdd506c] ...
	I0812 03:37:24.984465    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa8bcdd506c"
	I0812 03:37:25.003253    8914 logs.go:123] Gathering logs for kube-controller-manager [533ac025a3aa] ...
	I0812 03:37:25.003269    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533ac025a3aa"
	I0812 03:37:25.016099    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:37:25.016110    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:37:25.056125    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:37:25.056141    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:37:25.060737    8914 logs.go:123] Gathering logs for kube-apiserver [014c98333383] ...
	I0812 03:37:25.060742    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c98333383"
	I0812 03:37:25.075148    8914 logs.go:123] Gathering logs for kube-proxy [b3cf63e263fe] ...
	I0812 03:37:25.075163    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3cf63e263fe"
	I0812 03:37:25.089615    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:37:25.089627    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:37:25.101755    8914 logs.go:123] Gathering logs for etcd [19daa5a836e8] ...
	I0812 03:37:25.101765    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19daa5a836e8"
	I0812 03:37:25.119924    8914 logs.go:123] Gathering logs for etcd [dbf0e437f9ea] ...
	I0812 03:37:25.119939    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf0e437f9ea"
	I0812 03:37:25.134923    8914 logs.go:123] Gathering logs for coredns [42fb9cb7a732] ...
	I0812 03:37:25.134933    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42fb9cb7a732"
	I0812 03:37:25.147182    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:37:25.147195    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
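
Each log-gathering cycle above is the same two-step loop per component: list matching containers with "docker ps -a --filter name=k8s_<component>", then tail the last 400 lines of each. A condensed sketch:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// gatherComponentLogs lists k8s_<component> containers and tails each,
// as the logs.go cycle does for apiserver, etcd, coredns, and the rest.
func gatherComponentLogs(component string) error {
	ids, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return err
	}
	for _, id := range strings.Fields(string(ids)) {
		fmt.Printf("=== %s [%s] ===\n", component, id)
		out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		fmt.Printf("%s", out)
	}
	return nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		_ = gatherComponentLogs(c)
	}
}
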
	I0812 03:37:27.670955    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:37:26.224999    9066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 03:37:26.724967    9066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 03:37:26.729168    9066 api_server.go:72] duration metric: took 1.006326625s to wait for apiserver process to appear ...
	I0812 03:37:26.729178    9066 api_server.go:88] waiting for apiserver healthz status ...
	I0812 03:37:26.729187    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:37:32.671886    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:37:32.672117    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:37:32.699242    8914 logs.go:276] 2 containers: [014c98333383 a84eef3c085a]
	I0812 03:37:32.699367    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:37:32.715631    8914 logs.go:276] 2 containers: [19daa5a836e8 dbf0e437f9ea]
	I0812 03:37:32.715704    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:37:32.727542    8914 logs.go:276] 1 containers: [42fb9cb7a732]
	I0812 03:37:32.727617    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:37:32.738862    8914 logs.go:276] 2 containers: [12414f6e5bb9 497339253ef3]
	I0812 03:37:32.738940    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:37:32.749790    8914 logs.go:276] 1 containers: [b3cf63e263fe]
	I0812 03:37:32.749857    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:37:32.761063    8914 logs.go:276] 2 containers: [aaa8bcdd506c 533ac025a3aa]
	I0812 03:37:32.761135    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:37:32.771333    8914 logs.go:276] 0 containers: []
	W0812 03:37:32.771345    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:37:32.771400    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:37:32.781313    8914 logs.go:276] 0 containers: []
	W0812 03:37:32.781324    8914 logs.go:278] No container was found matching "storage-provisioner"
	I0812 03:37:32.781333    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:37:32.781339    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:37:32.817993    8914 logs.go:123] Gathering logs for kube-scheduler [12414f6e5bb9] ...
	I0812 03:37:32.818006    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12414f6e5bb9"
	I0812 03:37:32.830134    8914 logs.go:123] Gathering logs for kube-proxy [b3cf63e263fe] ...
	I0812 03:37:32.830145    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3cf63e263fe"
	I0812 03:37:32.845927    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:37:32.845938    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:37:32.870317    8914 logs.go:123] Gathering logs for etcd [19daa5a836e8] ...
	I0812 03:37:32.870325    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19daa5a836e8"
	I0812 03:37:32.884080    8914 logs.go:123] Gathering logs for kube-controller-manager [aaa8bcdd506c] ...
	I0812 03:37:32.884092    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa8bcdd506c"
	I0812 03:37:32.902438    8914 logs.go:123] Gathering logs for kube-controller-manager [533ac025a3aa] ...
	I0812 03:37:32.902452    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533ac025a3aa"
	I0812 03:37:32.914095    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:37:32.914106    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:37:32.953079    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:37:32.953092    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:37:32.958041    8914 logs.go:123] Gathering logs for kube-apiserver [014c98333383] ...
	I0812 03:37:32.958051    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c98333383"
	I0812 03:37:32.971770    8914 logs.go:123] Gathering logs for kube-apiserver [a84eef3c085a] ...
	I0812 03:37:32.971781    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84eef3c085a"
	I0812 03:37:32.996562    8914 logs.go:123] Gathering logs for etcd [dbf0e437f9ea] ...
	I0812 03:37:32.996576    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf0e437f9ea"
	I0812 03:37:33.016793    8914 logs.go:123] Gathering logs for kube-scheduler [497339253ef3] ...
	I0812 03:37:33.016804    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497339253ef3"
	I0812 03:37:33.031153    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:37:33.031164    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:37:33.042976    8914 logs.go:123] Gathering logs for coredns [42fb9cb7a732] ...
	I0812 03:37:33.042987    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42fb9cb7a732"
	I0812 03:37:31.731241    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:37:31.731274    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:37:35.556721    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:37:36.731578    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:37:36.731672    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:37:40.559116    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:37:40.559448    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:37:40.603624    8914 logs.go:276] 2 containers: [014c98333383 a84eef3c085a]
	I0812 03:37:40.603746    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:37:40.620412    8914 logs.go:276] 2 containers: [19daa5a836e8 dbf0e437f9ea]
	I0812 03:37:40.620497    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:37:40.632246    8914 logs.go:276] 1 containers: [42fb9cb7a732]
	I0812 03:37:40.632315    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:37:40.642887    8914 logs.go:276] 2 containers: [12414f6e5bb9 497339253ef3]
	I0812 03:37:40.642952    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:37:40.653453    8914 logs.go:276] 1 containers: [b3cf63e263fe]
	I0812 03:37:40.653524    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:37:40.664988    8914 logs.go:276] 2 containers: [aaa8bcdd506c 533ac025a3aa]
	I0812 03:37:40.665054    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:37:40.675588    8914 logs.go:276] 0 containers: []
	W0812 03:37:40.675600    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:37:40.675660    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:37:40.690446    8914 logs.go:276] 0 containers: []
	W0812 03:37:40.690462    8914 logs.go:278] No container was found matching "storage-provisioner"
	I0812 03:37:40.690469    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:37:40.690473    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:37:40.702963    8914 logs.go:123] Gathering logs for kube-apiserver [a84eef3c085a] ...
	I0812 03:37:40.702974    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84eef3c085a"
	I0812 03:37:40.728642    8914 logs.go:123] Gathering logs for etcd [dbf0e437f9ea] ...
	I0812 03:37:40.728653    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf0e437f9ea"
	I0812 03:37:40.747048    8914 logs.go:123] Gathering logs for kube-scheduler [12414f6e5bb9] ...
	I0812 03:37:40.747061    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12414f6e5bb9"
	I0812 03:37:40.759088    8914 logs.go:123] Gathering logs for kube-proxy [b3cf63e263fe] ...
	I0812 03:37:40.759101    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3cf63e263fe"
	I0812 03:37:40.771045    8914 logs.go:123] Gathering logs for kube-scheduler [497339253ef3] ...
	I0812 03:37:40.771058    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497339253ef3"
	I0812 03:37:40.786069    8914 logs.go:123] Gathering logs for kube-controller-manager [533ac025a3aa] ...
	I0812 03:37:40.786084    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533ac025a3aa"
	I0812 03:37:40.798233    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:37:40.798244    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:37:40.822452    8914 logs.go:123] Gathering logs for kube-apiserver [014c98333383] ...
	I0812 03:37:40.822459    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c98333383"
	I0812 03:37:40.836696    8914 logs.go:123] Gathering logs for etcd [19daa5a836e8] ...
	I0812 03:37:40.836708    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19daa5a836e8"
	I0812 03:37:40.853066    8914 logs.go:123] Gathering logs for coredns [42fb9cb7a732] ...
	I0812 03:37:40.853080    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42fb9cb7a732"
	I0812 03:37:40.864096    8914 logs.go:123] Gathering logs for kube-controller-manager [aaa8bcdd506c] ...
	I0812 03:37:40.864108    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa8bcdd506c"
	I0812 03:37:40.887440    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:37:40.887452    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:37:40.926579    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:37:40.926590    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:37:40.931613    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:37:40.931620    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:37:43.469523    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:37:41.732369    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:37:41.732403    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
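
Both processes (8914 and 9066) are running the same health-wait loop: GET https://10.0.2.15:8443/healthz with a short client timeout, log the "stopped: ... context deadline exceeded" failure, and retry until an overall deadline. A minimal version of that loop (InsecureSkipVerify is a simplification; the real client authenticates with the profile's client certs):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-request deadline, as in the log's failures
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			healthy := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if healthy {
				return nil
			}
		} else {
			fmt.Println("stopped:", err) // mirrors api_server.go:269
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute))
}
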
	I0812 03:37:48.471762    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:37:48.471952    8914 kubeadm.go:597] duration metric: took 4m3.935350292s to restartPrimaryControlPlane
	W0812 03:37:48.472062    8914 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0812 03:37:48.472113    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0812 03:37:49.431625    8914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 03:37:49.437144    8914 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 03:37:49.440265    8914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 03:37:49.443306    8914 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 03:37:49.443312    8914 kubeadm.go:157] found existing configuration files:
	
	I0812 03:37:49.443335    8914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51257 /etc/kubernetes/admin.conf
	I0812 03:37:49.446274    8914 kubeadm.go:163] "https://control-plane.minikube.internal:51257" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51257 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 03:37:49.446298    8914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 03:37:49.449307    8914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51257 /etc/kubernetes/kubelet.conf
	I0812 03:37:49.451894    8914 kubeadm.go:163] "https://control-plane.minikube.internal:51257" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51257 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 03:37:49.451920    8914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 03:37:49.455214    8914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51257 /etc/kubernetes/controller-manager.conf
	I0812 03:37:49.458163    8914 kubeadm.go:163] "https://control-plane.minikube.internal:51257" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51257 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 03:37:49.458186    8914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 03:37:49.461030    8914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51257 /etc/kubernetes/scheduler.conf
	I0812 03:37:49.463595    8914 kubeadm.go:163] "https://control-plane.minikube.internal:51257" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51257 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 03:37:49.463613    8914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
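	[Editor's note] The grep-then-rm cycle above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint, and any file that does not mention it (or does not exist — grep exit status 2 in this run) is removed before `kubeadm init`. A minimal sketch of the same logic; the helper name is illustrative, not minikube's actual API:

```go
// Sketch only — mirrors the grep-then-rm pattern in the log above.
package main

import (
	"fmt"
	"os/exec"
)

func cleanupStaleConfigs(endpoint string) {
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		// grep exits 1 when the pattern is absent and 2 when the file is
		// missing; either way the file cannot be reused as-is.
		if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
			exec.Command("sudo", "rm", "-f", conf).Run()
		}
	}
}

func main() {
	cleanupStaleConfigs("https://control-plane.minikube.internal:51257")
}
```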
	I0812 03:37:49.466765    8914 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 03:37:49.484672    8914 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0812 03:37:49.484717    8914 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 03:37:49.533130    8914 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 03:37:49.533196    8914 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 03:37:49.533248    8914 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0812 03:37:49.581726    8914 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 03:37:49.586004    8914 out.go:204]   - Generating certificates and keys ...
	I0812 03:37:49.586035    8914 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 03:37:49.586081    8914 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 03:37:49.586127    8914 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0812 03:37:49.586165    8914 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0812 03:37:49.586228    8914 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0812 03:37:49.586255    8914 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0812 03:37:49.586312    8914 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0812 03:37:49.586348    8914 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0812 03:37:49.586402    8914 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0812 03:37:49.586436    8914 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0812 03:37:49.586454    8914 kubeadm.go:310] [certs] Using the existing "sa" key
	I0812 03:37:49.586480    8914 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 03:37:49.788183    8914 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 03:37:49.984845    8914 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 03:37:50.228221    8914 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 03:37:50.283704    8914 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 03:37:50.313456    8914 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 03:37:50.313503    8914 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 03:37:50.313524    8914 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 03:37:50.408152    8914 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 03:37:46.733055    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:37:46.733079    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:37:50.412120    8914 out.go:204]   - Booting up control plane ...
	I0812 03:37:50.412166    8914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 03:37:50.412208    8914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 03:37:50.412244    8914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 03:37:50.412314    8914 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 03:37:50.412401    8914 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0812 03:37:51.734184    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:37:51.734208    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:37:55.414882    8914 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.002391 seconds
	I0812 03:37:55.414968    8914 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0812 03:37:55.419934    8914 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0812 03:37:55.936193    8914 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0812 03:37:55.936437    8914 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-969000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0812 03:37:56.442298    8914 kubeadm.go:310] [bootstrap-token] Using token: qvizt4.2y8zyl62kvg199ij
	I0812 03:37:56.448908    8914 out.go:204]   - Configuring RBAC rules ...
	I0812 03:37:56.448974    8914 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0812 03:37:56.449034    8914 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0812 03:37:56.450704    8914 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0812 03:37:56.452216    8914 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0812 03:37:56.453202    8914 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0812 03:37:56.453957    8914 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0812 03:37:56.457058    8914 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0812 03:37:56.628943    8914 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0812 03:37:56.848272    8914 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0812 03:37:56.848736    8914 kubeadm.go:310] 
	I0812 03:37:56.848768    8914 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0812 03:37:56.848772    8914 kubeadm.go:310] 
	I0812 03:37:56.848812    8914 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0812 03:37:56.848815    8914 kubeadm.go:310] 
	I0812 03:37:56.848832    8914 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0812 03:37:56.848865    8914 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0812 03:37:56.848894    8914 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0812 03:37:56.848899    8914 kubeadm.go:310] 
	I0812 03:37:56.848924    8914 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0812 03:37:56.848927    8914 kubeadm.go:310] 
	I0812 03:37:56.848960    8914 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0812 03:37:56.848965    8914 kubeadm.go:310] 
	I0812 03:37:56.848988    8914 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0812 03:37:56.849020    8914 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0812 03:37:56.849086    8914 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0812 03:37:56.849090    8914 kubeadm.go:310] 
	I0812 03:37:56.849161    8914 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0812 03:37:56.849199    8914 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0812 03:37:56.849258    8914 kubeadm.go:310] 
	I0812 03:37:56.849296    8914 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qvizt4.2y8zyl62kvg199ij \
	I0812 03:37:56.849343    8914 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a3a24dc3606022793e481fb5bba25e8937e026ae56b76602b092063eafcc562a \
	I0812 03:37:56.849356    8914 kubeadm.go:310] 	--control-plane 
	I0812 03:37:56.849358    8914 kubeadm.go:310] 
	I0812 03:37:56.849429    8914 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0812 03:37:56.849511    8914 kubeadm.go:310] 
	I0812 03:37:56.849601    8914 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qvizt4.2y8zyl62kvg199ij \
	I0812 03:37:56.849660    8914 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a3a24dc3606022793e481fb5bba25e8937e026ae56b76602b092063eafcc562a 
	I0812 03:37:56.849719    8914 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0812 03:37:56.849726    8914 cni.go:84] Creating CNI manager for ""
	I0812 03:37:56.849733    8914 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0812 03:37:56.854067    8914 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0812 03:37:56.858066    8914 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0812 03:37:56.861163    8914 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
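	[Editor's note] The scp above pushes a 496-byte conflist from memory; the log does not show its contents. As a rough illustration of what the "recommending bridge" step produces, a representative bridge CNI config might look like the one below — the subnet and plugin options are assumptions, not values recovered from this run:

```go
// Sketch only — writes an assumed bridge+portmap conflist, not the logged payload.
package main

import (
	"fmt"
	"os"
)

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		fmt.Println(err)
		return
	}
	// 1-k8s.conflist sorts first, so the runtime picks it as the default network.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println(err)
	}
}
```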
	I0812 03:37:56.865930    8914 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 03:37:56.865975    8914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 03:37:56.866004    8914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-969000 minikube.k8s.io/updated_at=2024_08_12T03_37_56_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7 minikube.k8s.io/name=running-upgrade-969000 minikube.k8s.io/primary=true
	I0812 03:37:56.913914    8914 kubeadm.go:1113] duration metric: took 47.977583ms to wait for elevateKubeSystemPrivileges
	I0812 03:37:56.913919    8914 ops.go:34] apiserver oom_adj: -16
	I0812 03:37:56.913934    8914 kubeadm.go:394] duration metric: took 4m12.395964167s to StartCluster
	I0812 03:37:56.913944    8914 settings.go:142] acquiring lock: {Name:mk405bca217b1764467e7caec79ed71135791229 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:37:56.914113    8914 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:37:56.914506    8914 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/kubeconfig: {Name:mkb70885d9201a61b449567803d8de7b739c5101 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:37:56.914709    8914 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:37:56.914714    8914 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0812 03:37:56.914750    8914 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-969000"
	I0812 03:37:56.914756    8914 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-969000"
	I0812 03:37:56.914764    8914 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-969000"
	W0812 03:37:56.914768    8914 addons.go:243] addon storage-provisioner should already be in state true
	I0812 03:37:56.914782    8914 host.go:66] Checking if "running-upgrade-969000" exists ...
	I0812 03:37:56.914812    8914 config.go:182] Loaded profile config "running-upgrade-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0812 03:37:56.914847    8914 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-969000"
	I0812 03:37:56.915684    8914 kapi.go:59] client config for running-upgrade-969000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/running-upgrade-969000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/running-upgrade-969000/client.key", CAFile:"/Users/jenkins/minikube-integration/19409-6342/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1040a04e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}

	I0812 03:37:56.915800    8914 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-969000"
	W0812 03:37:56.915804    8914 addons.go:243] addon default-storageclass should already be in state true
	I0812 03:37:56.915817    8914 host.go:66] Checking if "running-upgrade-969000" exists ...
	I0812 03:37:56.919191    8914 out.go:177] * Verifying Kubernetes components...
	I0812 03:37:56.919562    8914 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 03:37:56.923239    8914 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 03:37:56.923245    8914 sshutil.go:53] new ssh client: &{IP:localhost Port:51225 SSHKeyPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/running-upgrade-969000/id_rsa Username:docker}
	I0812 03:37:56.927027    8914 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 03:37:56.930100    8914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 03:37:56.934077    8914 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 03:37:56.934084    8914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 03:37:56.934090    8914 sshutil.go:53] new ssh client: &{IP:localhost Port:51225 SSHKeyPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/running-upgrade-969000/id_rsa Username:docker}
	I0812 03:37:57.005512    8914 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 03:37:57.010455    8914 api_server.go:52] waiting for apiserver process to appear ...
	I0812 03:37:57.010503    8914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 03:37:57.014543    8914 api_server.go:72] duration metric: took 99.822542ms to wait for apiserver process to appear ...
	I0812 03:37:57.014551    8914 api_server.go:88] waiting for apiserver healthz status ...
	I0812 03:37:57.014558    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:37:57.042162    8914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 03:37:57.051930    8914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 03:37:56.735150    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:37:56.735176    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:02.016673    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:02.016715    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:01.736429    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:01.736485    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:07.017419    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:07.017473    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:06.737226    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:06.737248    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:12.017889    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:12.017943    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:11.739113    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:11.739134    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:17.018509    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:17.018558    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:16.740647    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:16.740691    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:22.019309    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:22.019342    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:21.742882    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:21.742930    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:27.020234    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:27.020260    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0812 03:38:27.380650    8914 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0812 03:38:27.385069    8914 out.go:177] * Enabled addons: storage-provisioner
	I0812 03:38:27.392954    8914 addons.go:510] duration metric: took 30.478657166s for enable addons: enabled=[storage-provisioner]
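	[Editor's note] The addon flow above is: scp the manifest into /etc/kubernetes/addons, then apply it with the cluster's own kubectl binary and kubeconfig. storage-provisioner applied, but default-storageclass failed with the "dial tcp 10.0.2.15:8443: i/o timeout" because the apiserver never came up. A minimal sketch of the apply step, with paths copied from the log and error handling trimmed:

```go
// Sketch only — mirrors the "sudo KUBECONFIG=... kubectl apply -f ..." lines above.
package main

import (
	"fmt"
	"os/exec"
)

func applyAddon(manifest string) error {
	// sudo accepts VAR=value assignments before the command, as in the log.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"apply", "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		// An unreachable apiserver surfaces here as the
		// "dial tcp ... i/o timeout" seen for default-storageclass above.
		return fmt.Errorf("apply %s: %v: %s", manifest, err, out)
	}
	return nil
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	} {
		if err := applyAddon(m); err != nil {
			fmt.Println(err)
		}
	}
}
```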
	I0812 03:38:26.745115    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:26.745236    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:38:26.757357    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:38:26.757428    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:38:26.769114    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:38:26.769178    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:38:26.780624    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:38:26.780692    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:38:26.790835    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:38:26.790909    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:38:26.800876    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:38:26.800936    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:38:26.811453    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:38:26.811515    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:38:26.821879    9066 logs.go:276] 0 containers: []
	W0812 03:38:26.821894    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:38:26.821956    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:38:26.837172    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:38:26.837191    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:38:26.837196    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:38:26.851525    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:38:26.851535    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:38:26.863193    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:38:26.863208    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:38:26.900732    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:38:26.900740    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:38:26.914435    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:38:26.914447    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:38:26.926305    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:38:26.926316    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:38:26.947763    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:38:26.947773    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:38:27.042962    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:38:27.042973    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:38:27.054637    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:38:27.054654    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:38:27.067488    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:38:27.067500    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:38:27.079086    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:38:27.079097    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:38:27.105717    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:38:27.105727    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:38:27.121736    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:38:27.121748    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:38:27.133336    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:38:27.133346    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:38:27.137516    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:38:27.137524    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:38:27.165741    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:38:27.165753    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:38:27.179693    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:38:27.179704    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
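	[Editor's note] Each "Gathering logs" cycle above repeats the same two-step pattern per component: list matching containers with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, then tail the last 400 lines of each hit. A minimal sketch of one cycle; the shell fragments are the ones the log runs over SSH, while the Go scaffolding around them is illustrative:

```go
// Sketch only — one diagnostics-gathering pass over the control-plane containers.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func gather(component string) {
	// docker ps -a --filter=name=k8s_<component> --format={{.ID}}
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	ids := strings.Fields(string(out))
	fmt.Printf("%d containers: %v\n", len(ids), ids)
	for _, id := range ids {
		// docker logs --tail 400 <id>
		logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		fmt.Printf("--- %s [%s] ---\n%s", component, id, logs)
	}
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		gather(c)
	}
}
```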
	I0812 03:38:29.695866    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:32.021428    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:32.021486    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:34.698178    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:34.698326    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:38:34.712122    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:38:34.712208    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:38:34.724095    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:38:34.724164    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:38:34.735536    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:38:34.735604    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:38:34.746047    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:38:34.746118    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:38:34.756640    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:38:34.756714    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:38:34.768976    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:38:34.769045    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:38:34.782588    9066 logs.go:276] 0 containers: []
	W0812 03:38:34.782600    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:38:34.782657    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:38:34.793181    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:38:34.793206    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:38:34.793212    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:38:34.797378    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:38:34.797387    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:38:34.823534    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:38:34.823548    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:38:34.861916    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:38:34.861928    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:38:34.876471    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:38:34.876482    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:38:34.888070    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:38:34.888082    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:38:34.900179    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:38:34.900193    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:38:34.914055    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:38:34.914066    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:38:34.928289    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:38:34.928300    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:38:34.939406    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:38:34.939418    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:38:34.951054    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:38:34.951065    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:38:34.966252    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:38:34.966263    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:38:34.984123    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:38:34.984134    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:38:35.019458    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:38:35.019469    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:38:35.031898    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:38:35.031910    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:38:35.043521    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:38:35.043532    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:38:35.070171    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:38:35.070183    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:38:37.022911    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:37.022973    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:37.587218    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:42.024947    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:42.024989    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:42.589818    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:42.590061    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:38:42.623606    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:38:42.623734    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:38:42.640768    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:38:42.640862    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:38:42.653731    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:38:42.653805    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:38:42.666092    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:38:42.666170    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:38:42.676641    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:38:42.676707    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:38:42.688744    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:38:42.688818    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:38:42.698920    9066 logs.go:276] 0 containers: []
	W0812 03:38:42.698932    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:38:42.698988    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:38:42.709487    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:38:42.709506    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:38:42.709512    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:38:42.747857    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:38:42.747874    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:38:42.762084    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:38:42.762095    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:38:42.782825    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:38:42.782840    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:38:42.795311    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:38:42.795322    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:38:42.830889    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:38:42.830904    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:38:42.848594    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:38:42.848604    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:38:42.863732    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:38:42.863742    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:38:42.877005    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:38:42.877016    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:38:42.892450    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:38:42.892469    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:38:42.896862    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:38:42.896869    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:38:42.921975    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:38:42.921990    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:38:42.933564    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:38:42.933579    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:38:42.945602    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:38:42.945617    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:38:42.957446    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:38:42.957456    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:38:42.982784    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:38:42.982792    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:38:42.998320    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:38:42.998330    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:38:45.509742    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:47.026024    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:47.026052    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:50.512096    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:50.512454    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:38:50.543869    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:38:50.543997    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:38:50.562944    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:38:50.563040    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:38:50.577176    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:38:50.577266    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:38:50.589174    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:38:50.589252    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:38:50.600078    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:38:50.600153    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:38:50.611577    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:38:50.611644    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:38:50.622060    9066 logs.go:276] 0 containers: []
	W0812 03:38:50.622071    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:38:50.622124    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:38:50.632767    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:38:50.632784    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:38:50.632790    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:38:50.645670    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:38:50.645685    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:38:50.650397    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:38:50.650405    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:38:50.684802    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:38:50.684813    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:38:50.699069    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:38:50.699080    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:38:50.711011    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:38:50.711021    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:38:50.724310    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:38:50.724322    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:38:50.736679    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:38:50.736689    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:38:50.754901    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:38:50.754913    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:38:50.788272    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:38:50.788283    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:38:50.802556    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:38:50.802567    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:38:50.813722    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:38:50.813736    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:38:50.825356    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:38:50.825365    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:38:50.840851    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:38:50.840866    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:38:50.880168    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:38:50.880177    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:38:50.897278    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:38:50.897292    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:38:50.922890    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:38:50.922901    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:38:52.027223    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:52.027270    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:53.442409    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:57.027770    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:57.027926    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:38:57.040430    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:38:57.040505    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:38:57.051810    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:38:57.051874    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:38:57.062739    8914 logs.go:276] 2 containers: [f2d00d5db5b6 08ca3e5de50c]
	I0812 03:38:57.062815    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:38:57.073272    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:38:57.073341    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:38:57.085044    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:38:57.085125    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:38:57.095906    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:38:57.095967    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:38:57.106857    8914 logs.go:276] 0 containers: []
	W0812 03:38:57.106867    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:38:57.106919    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:38:57.117168    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:38:57.117184    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:38:57.117191    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:38:57.131311    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:38:57.131324    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:38:57.143682    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:38:57.143695    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:38:57.155907    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:38:57.155919    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:38:57.168384    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:38:57.168397    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:38:57.185634    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:38:57.185645    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:38:57.223568    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:38:57.223574    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:38:57.227974    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:38:57.227983    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:38:57.264240    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:38:57.264255    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:38:57.278107    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:38:57.278119    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:38:57.289759    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:38:57.289775    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:38:57.304901    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:38:57.304912    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:38:57.328641    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:38:57.328650    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:38:58.444898    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:58.445117    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:38:58.471958    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:38:58.472086    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:38:58.489818    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:38:58.489906    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:38:58.503627    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:38:58.503700    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:38:58.515827    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:38:58.515894    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:38:58.526137    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:38:58.526200    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:38:58.536757    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:38:58.536822    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:38:58.547278    9066 logs.go:276] 0 containers: []
	W0812 03:38:58.547289    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:38:58.547338    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:38:58.557662    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:38:58.557684    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:38:58.557689    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:38:58.569701    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:38:58.569713    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:38:58.595661    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:38:58.595669    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:38:58.599926    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:38:58.599935    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:38:58.614307    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:38:58.614319    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:38:58.640533    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:38:58.640543    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:38:58.651809    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:38:58.651820    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:38:58.663543    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:38:58.663555    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:38:58.675395    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:38:58.675406    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:38:58.686360    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:38:58.686370    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:38:58.701648    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:38:58.701658    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:38:58.736348    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:38:58.736358    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:38:58.751680    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:38:58.751689    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:38:58.764349    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:38:58.764360    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:38:58.803674    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:38:58.803687    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:38:58.817482    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:38:58.817493    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:38:58.832236    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:38:58.832247    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:38:59.842187    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:39:01.355766    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:39:04.844588    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
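
The lines above show this section's recurring failure mode: each probe of https://10.0.2.15:8443/healthz hangs until the HTTP client's timeout fires, logged as "context deadline exceeded (Client.Timeout exceeded while awaiting headers)". A minimal Go sketch of that polling pattern follows; it is not minikube's actual api_server.go, and the 5s timeout, 3s retry interval, and 2-minute overall deadline are invented for illustration.

```go
// Sketch only: poll an apiserver healthz endpoint until it answers
// 200 OK or an overall deadline passes. Assumed values throughout.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// A hung apiserver surfaces exactly as in the log:
		// "context deadline exceeded (Client.Timeout exceeded while awaiting headers)".
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The cluster serves a self-signed certificate; a health
			// probe typically skips verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		fmt.Println("Checking apiserver healthz at https://10.0.2.15:8443/healthz ...")
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err)
			time.Sleep(3 * time.Second)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver is healthy")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for the apiserver")
}
```

In this report the probe never succeeds, so each cycle falls through to the log-gathering steps below.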
	I0812 03:39:04.844750    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:39:04.858386    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:39:04.858464    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:39:04.871278    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:39:04.871359    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:39:04.883036    8914 logs.go:276] 2 containers: [f2d00d5db5b6 08ca3e5de50c]
	I0812 03:39:04.883102    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:39:04.893772    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:39:04.893843    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:39:04.904433    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:39:04.904498    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:39:04.914831    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:39:04.914907    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:39:04.925025    8914 logs.go:276] 0 containers: []
	W0812 03:39:04.925039    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:39:04.925096    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:39:04.935646    8914 logs.go:276] 1 containers: [d33ae37b24cf]
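
After each failed probe, the process enumerates control-plane containers, one `docker ps -a` per component with a name filter (the logs.go:276 lines above). A self-contained sketch of that discovery step follows; the containerIDs helper is invented for illustration, and only the docker invocation mirrors the log.

```go
// Sketch of per-component container discovery via docker ps filters.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name
// matches k8s_<component>, as the Run lines above do.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"storage-provisioner",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			// Matches the W... "No container was found matching" warnings,
			// seen above for "kindnet" on this cluster.
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}
```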
	I0812 03:39:04.935663    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:39:04.935669    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:39:04.950090    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:39:04.950100    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:39:04.961511    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:39:04.961525    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:39:04.997705    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:39:04.997721    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:39:05.002353    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:39:05.002360    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:39:05.037296    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:39:05.037309    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:39:05.051890    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:39:05.051900    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:39:05.063838    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:39:05.063849    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:39:05.076035    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:39:05.076047    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:39:05.101073    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:39:05.101081    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:39:05.116210    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:39:05.116222    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:39:05.127658    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:39:05.127668    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:39:05.144997    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:39:05.145010    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
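
Every source in the "Gathering logs for ..." lines is collected the same way: a command string wrapped in `/bin/bash -c`, executed on the node through ssh_runner. The sketch below runs the same commands directly instead of over SSH, which is an assumption for illustration; the command strings themselves are copied from the log.

```go
// Local-only sketch of the bash-wrapped log gathering above.
package main

import (
	"fmt"
	"os/exec"
)

func gather(label, script string) {
	fmt.Printf("Gathering logs for %s ...\n", label)
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	if err != nil {
		fmt.Printf("  gather %q failed: %v\n", label, err)
	}
	fmt.Print(string(out))
}

func main() {
	// Per-container logs, capped at the last 400 lines.
	gather("kube-apiserver [b9531f8a0da1]", "docker logs --tail 400 b9531f8a0da1")
	// Host units via journalctl, same 400-line cap.
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
	// Kernel messages at warning level and above, no pager or color.
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
}
```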
	I0812 03:39:07.658758    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:39:06.358110    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:39:06.358345    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:39:06.383498    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:39:06.383620    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:39:06.400467    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:39:06.400559    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:39:06.414355    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:39:06.414420    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:39:06.425581    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:39:06.425649    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:39:06.435980    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:39:06.436056    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:39:06.446791    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:39:06.446863    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:39:06.457137    9066 logs.go:276] 0 containers: []
	W0812 03:39:06.457148    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:39:06.457201    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:39:06.468092    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:39:06.468112    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:39:06.468118    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:39:06.504738    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:39:06.504748    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:39:06.509465    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:39:06.509471    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:39:06.534658    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:39:06.534666    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:39:06.562585    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:39:06.562596    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:39:06.581200    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:39:06.581211    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:39:06.592623    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:39:06.592633    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:39:06.606966    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:39:06.606979    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:39:06.621488    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:39:06.621498    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:39:06.632756    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:39:06.632767    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:39:06.644107    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:39:06.644118    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:39:06.655866    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:39:06.655877    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:39:06.673636    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:39:06.673651    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:39:06.690878    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:39:06.690888    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:39:06.702477    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:39:06.702492    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:39:06.736807    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:39:06.736817    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:39:06.750909    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:39:06.750922    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:39:09.266976    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:39:12.661048    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:39:12.661281    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:39:12.683826    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:39:12.683914    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:39:12.698170    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:39:12.698247    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:39:12.709138    8914 logs.go:276] 2 containers: [f2d00d5db5b6 08ca3e5de50c]
	I0812 03:39:12.709204    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:39:12.721420    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:39:12.721494    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:39:12.731900    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:39:12.731961    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:39:12.742724    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:39:12.742781    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:39:12.753068    8914 logs.go:276] 0 containers: []
	W0812 03:39:12.753080    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:39:12.753143    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:39:12.763652    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:39:12.763670    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:39:12.763675    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:39:12.777427    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:39:12.777440    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:39:12.789381    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:39:12.789392    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:39:12.806694    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:39:12.806706    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:39:12.818103    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:39:12.818115    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:39:12.841650    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:39:12.841658    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:39:12.880766    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:39:12.880778    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:39:12.886028    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:39:12.886035    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:39:12.900627    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:39:12.900637    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:39:12.914617    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:39:12.914629    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:39:12.926354    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:39:12.926364    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:39:12.961453    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:39:12.961466    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:39:12.974262    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:39:12.974273    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
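
The "container status" step in each cycle uses a shell fallback: prefer crictl when it is installed, otherwise fall back to plain `docker ps -a`. The shell original is `sudo \`which crictl || echo crictl\` ps -a || sudo docker ps -a`; a Go sketch of the same logic, assuming sudo is available non-interactively:

```go
// Sketch of the crictl-or-docker container-status fallback.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// First choice: crictl, if present on PATH.
	if path, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", path, "ps", "-a").CombinedOutput(); err == nil {
			fmt.Print(string(out))
			return
		}
	}
	// Fallback: plain docker.
	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Fprintln(os.Stderr, "neither crictl nor docker succeeded:", err)
		os.Exit(1)
	}
	fmt.Print(string(out))
}
```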
	I0812 03:39:14.269223    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:39:14.269594    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:39:14.312789    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:39:14.312902    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:39:14.333893    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:39:14.333972    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:39:14.345434    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:39:14.345510    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:39:14.356860    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:39:14.356932    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:39:14.368470    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:39:14.368543    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:39:14.379526    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:39:14.379591    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:39:14.390080    9066 logs.go:276] 0 containers: []
	W0812 03:39:14.390092    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:39:14.390154    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:39:14.400530    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:39:14.400548    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:39:14.400553    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:39:14.413256    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:39:14.413266    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:39:14.427570    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:39:14.427584    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:39:14.443440    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:39:14.443451    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:39:14.455488    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:39:14.455498    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:39:14.468972    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:39:14.468986    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:39:14.505229    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:39:14.505240    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:39:14.530433    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:39:14.530448    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:39:14.544597    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:39:14.544607    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:39:14.556679    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:39:14.556690    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:39:14.580856    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:39:14.580865    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:39:14.593872    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:39:14.593888    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:39:14.631785    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:39:14.631795    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:39:14.647164    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:39:14.647174    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:39:14.664245    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:39:14.664259    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:39:14.678722    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:39:14.678732    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:39:14.691341    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:39:14.691356    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
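
One more detail worth noting from the cycles above: the "describe nodes" step does not use whatever kubectl is on the host PATH. It invokes the version-pinned binary stored inside the VM (/var/lib/minikube/binaries/v1.24.1/kubectl here) against the in-VM kubeconfig, so the client always matches the cluster version. A sketch, again assuming local execution rather than SSH:

```go
// Sketch of the version-pinned "describe nodes" invocation.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Fprintln(os.Stderr, "describe nodes failed:", err)
	}
	fmt.Print(string(out))
}
```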
	I0812 03:39:15.491056    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:39:17.203940    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:39:20.493814    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:39:20.494020    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:39:20.515782    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:39:20.515879    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:39:20.531488    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:39:20.531573    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:39:20.544260    8914 logs.go:276] 2 containers: [f2d00d5db5b6 08ca3e5de50c]
	I0812 03:39:20.544324    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:39:20.557022    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:39:20.557087    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:39:20.567805    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:39:20.567870    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:39:20.578311    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:39:20.578371    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:39:20.588454    8914 logs.go:276] 0 containers: []
	W0812 03:39:20.588465    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:39:20.588518    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:39:20.599413    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:39:20.599426    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:39:20.599431    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:39:20.613448    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:39:20.613460    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:39:20.625327    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:39:20.625338    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:39:20.640529    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:39:20.640541    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:39:20.652586    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:39:20.652598    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:39:20.664029    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:39:20.664040    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:39:20.699989    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:39:20.700018    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:39:20.704795    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:39:20.704801    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:39:20.723137    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:39:20.723147    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:39:20.748153    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:39:20.748173    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:39:20.761081    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:39:20.761092    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:39:20.796405    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:39:20.796417    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:39:20.808734    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:39:20.808749    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:39:23.328867    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:39:22.206231    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:39:22.206327    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:39:22.222028    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:39:22.222092    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:39:22.232655    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:39:22.232722    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:39:22.242924    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:39:22.242983    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:39:22.257715    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:39:22.257785    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:39:22.267992    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:39:22.268055    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:39:22.283945    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:39:22.284011    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:39:22.302128    9066 logs.go:276] 0 containers: []
	W0812 03:39:22.302139    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:39:22.302200    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:39:22.312297    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:39:22.312314    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:39:22.312320    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:39:22.329738    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:39:22.329749    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:39:22.344160    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:39:22.344174    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:39:22.355713    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:39:22.355724    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:39:22.390232    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:39:22.390243    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:39:22.407403    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:39:22.407414    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:39:22.418843    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:39:22.418855    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:39:22.430708    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:39:22.430723    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:39:22.454302    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:39:22.454314    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:39:22.468335    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:39:22.468345    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:39:22.480004    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:39:22.480016    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:39:22.495408    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:39:22.495417    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:39:22.534744    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:39:22.534755    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:39:22.546897    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:39:22.546907    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:39:22.558725    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:39:22.558741    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:39:22.570713    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:39:22.570729    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:39:22.595128    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:39:22.595134    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:39:25.101628    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:39:28.331158    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:39:28.331526    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:39:28.368942    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:39:28.369093    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:39:28.388144    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:39:28.388238    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:39:28.403268    8914 logs.go:276] 2 containers: [f2d00d5db5b6 08ca3e5de50c]
	I0812 03:39:28.403345    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:39:28.415989    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:39:28.416065    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:39:28.430419    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:39:28.430489    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:39:28.441235    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:39:28.441303    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:39:28.452772    8914 logs.go:276] 0 containers: []
	W0812 03:39:28.452783    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:39:28.452842    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:39:28.463793    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:39:28.463812    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:39:28.463819    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:39:28.479351    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:39:28.479362    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:39:28.500398    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:39:28.500409    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:39:28.523807    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:39:28.523818    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:39:28.559299    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:39:28.559314    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:39:28.563998    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:39:28.564006    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:39:28.579192    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:39:28.579208    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:39:28.590740    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:39:28.590754    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:39:28.602805    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:39:28.602820    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:39:28.639265    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:39:28.639278    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:39:28.653818    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:39:28.653832    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:39:28.666368    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:39:28.666380    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:39:28.684458    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:39:28.684472    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:39:30.104057    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:39:30.104463    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:39:30.134585    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:39:30.134726    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:39:30.153235    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:39:30.153341    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:39:30.166959    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:39:30.167037    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:39:30.179044    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:39:30.179111    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:39:30.189608    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:39:30.189679    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:39:30.199996    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:39:30.200064    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:39:30.211676    9066 logs.go:276] 0 containers: []
	W0812 03:39:30.211691    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:39:30.211744    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:39:30.222089    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:39:30.222107    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:39:30.222113    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:39:30.246541    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:39:30.246553    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:39:30.266362    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:39:30.266374    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:39:30.280291    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:39:30.280302    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:39:30.291887    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:39:30.291896    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:39:30.330824    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:39:30.330832    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:39:30.344807    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:39:30.344821    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:39:30.365805    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:39:30.365819    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:39:30.402605    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:39:30.402621    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:39:30.420450    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:39:30.420463    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:39:30.431847    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:39:30.431858    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:39:30.443735    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:39:30.443745    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:39:30.454774    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:39:30.454787    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:39:30.479240    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:39:30.479248    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:39:30.491735    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:39:30.491750    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:39:30.496367    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:39:30.496376    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:39:30.521755    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:39:30.521767    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:39:31.198189    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:39:33.038363    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:39:36.198635    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:39:36.198840    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:39:36.222171    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:39:36.222254    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:39:36.234030    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:39:36.234096    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:39:36.244486    8914 logs.go:276] 2 containers: [f2d00d5db5b6 08ca3e5de50c]
	I0812 03:39:36.244540    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:39:36.254736    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:39:36.254805    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:39:36.264979    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:39:36.265036    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:39:36.275915    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:39:36.275985    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:39:36.286276    8914 logs.go:276] 0 containers: []
	W0812 03:39:36.286287    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:39:36.286347    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:39:36.296489    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:39:36.296503    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:39:36.296508    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:39:36.301179    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:39:36.301188    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:39:36.315426    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:39:36.315436    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:39:36.329444    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:39:36.329454    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:39:36.349019    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:39:36.349031    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:39:36.363448    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:39:36.363458    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:39:36.374752    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:39:36.374761    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:39:36.392084    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:39:36.392092    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:39:36.417133    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:39:36.417147    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:39:36.429831    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:39:36.429843    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:39:36.470611    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:39:36.470625    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:39:36.506185    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:39:36.506199    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:39:36.518198    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:39:36.518209    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:39:39.032251    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:39:38.040840    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:39:38.041134    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:39:38.075945    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:39:38.076073    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:39:38.094424    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:39:38.094522    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:39:38.112814    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:39:38.112883    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:39:38.126093    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:39:38.126165    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:39:38.136649    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:39:38.136715    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:39:38.147817    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:39:38.147883    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:39:38.158983    9066 logs.go:276] 0 containers: []
	W0812 03:39:38.158994    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:39:38.159048    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:39:38.170055    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:39:38.170075    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:39:38.170081    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:39:38.184168    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:39:38.184179    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:39:38.196074    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:39:38.196087    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:39:38.217939    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:39:38.217950    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:39:38.229958    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:39:38.229968    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:39:38.234031    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:39:38.234038    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:39:38.245379    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:39:38.245390    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:39:38.264728    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:39:38.264738    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:39:38.280370    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:39:38.280380    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:39:38.318005    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:39:38.318014    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:39:38.342216    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:39:38.342229    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:39:38.356226    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:39:38.356236    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:39:38.371670    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:39:38.371683    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:39:38.382653    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:39:38.382666    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:39:38.406684    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:39:38.406694    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:39:38.445778    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:39:38.445793    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:39:38.461224    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:39:38.461236    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:39:40.974426    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:39:44.033247    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:39:44.033461    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:39:44.054487    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:39:44.054602    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:39:44.071069    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:39:44.071145    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:39:44.082934    8914 logs.go:276] 2 containers: [f2d00d5db5b6 08ca3e5de50c]
	I0812 03:39:44.083009    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:39:44.094087    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:39:44.094139    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:39:44.105267    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:39:44.105336    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:39:44.115842    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:39:44.115896    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:39:44.125929    8914 logs.go:276] 0 containers: []
	W0812 03:39:44.125941    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:39:44.125993    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:39:44.136587    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:39:44.136602    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:39:44.136607    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:39:44.160817    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:39:44.160826    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:39:44.195084    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:39:44.195098    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:39:44.209019    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:39:44.209030    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:39:44.224342    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:39:44.224352    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:39:44.236233    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:39:44.236245    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:39:44.247429    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:39:44.247439    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:39:44.265430    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:39:44.265441    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:39:44.277529    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:39:44.277539    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:39:44.288742    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:39:44.288752    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:39:44.325632    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:39:44.325644    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:39:44.330399    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:39:44.330405    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:39:44.345576    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:39:44.345585    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:39:45.976802    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:39:45.976973    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:39:45.997181    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:39:45.997257    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:39:46.011821    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:39:46.011880    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:39:46.023216    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:39:46.023287    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:39:46.033672    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:39:46.033745    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:39:46.044541    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:39:46.044612    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:39:46.057527    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:39:46.057594    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:39:46.067517    9066 logs.go:276] 0 containers: []
	W0812 03:39:46.067527    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:39:46.067578    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:39:46.859756    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:39:46.077916    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:39:46.077986    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:39:46.077992    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:39:46.092420    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:39:46.092431    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:39:46.103953    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:39:46.103966    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:39:46.116493    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:39:46.116505    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:39:46.156222    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:39:46.156232    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:39:46.170414    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:39:46.170431    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:39:46.195589    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:39:46.195600    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:39:46.208947    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:39:46.208959    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:39:46.220582    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:39:46.220593    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:39:46.235016    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:39:46.235030    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:39:46.249370    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:39:46.249380    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:39:46.261172    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:39:46.261181    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:39:46.272530    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:39:46.272541    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:39:46.277122    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:39:46.277128    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:39:46.312197    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:39:46.312210    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:39:46.324135    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:39:46.324146    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:39:46.341549    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:39:46.341565    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
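
The alternating "Checking apiserver healthz" / "stopped: ... context deadline exceeded" pairs above show the two test processes (pids 8914 and 9066) polling the apiserver before each log-gathering pass. A minimal sketch of that polling pattern, assuming a plain net/http client and the roughly 5-second per-attempt timeout the timestamps suggest (an illustration only, not minikube's actual api_server.go code):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // pollHealthz probes the apiserver /healthz endpoint until it answers
    // or the overall deadline passes, mirroring the Checking/stopped pairs
    // in the log above. Timeout and URL are assumptions taken from the log.
    func pollHealthz(url string, deadline time.Time) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // per-attempt timeout, as the ~5 s log gaps suggest
    		Transport: &http.Transport{
    			// the apiserver serves a self-signed cert inside the VM
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	for time.Now().Before(deadline) {
    		fmt.Printf("Checking apiserver healthz at %s ...\n", url)
    		resp, err := client.Get(url)
    		if err != nil {
    			// in the real flow, a failed probe triggers a log-gathering pass
    			fmt.Printf("stopped: %s: %v\n", url, err)
    			continue
    		}
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			return nil
    		}
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	_ = pollHealthz("https://10.0.2.15:8443/healthz", time.Now().Add(2*time.Minute))
    }
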
	I0812 03:39:48.867817    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:39:51.861942    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:39:51.862143    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:39:51.884564    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:39:51.884643    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:39:51.897959    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:39:51.898031    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:39:51.908985    8914 logs.go:276] 2 containers: [f2d00d5db5b6 08ca3e5de50c]
	I0812 03:39:51.909057    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:39:51.920286    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:39:51.920348    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:39:51.930755    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:39:51.930815    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:39:51.941152    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:39:51.941206    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:39:51.951848    8914 logs.go:276] 0 containers: []
	W0812 03:39:51.951860    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:39:51.951917    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:39:51.962484    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:39:51.962500    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:39:51.962506    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:39:52.000689    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:39:52.000698    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:39:52.005838    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:39:52.005844    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:39:52.020211    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:39:52.020221    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:39:52.033420    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:39:52.033431    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:39:52.048006    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:39:52.048016    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:39:52.066877    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:39:52.066887    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:39:52.088794    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:39:52.088804    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:39:52.100394    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:39:52.100405    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:39:52.112343    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:39:52.112353    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:39:52.150690    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:39:52.150701    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:39:52.164742    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:39:52.164756    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:39:52.177397    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:39:52.177411    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:39:53.870049    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:39:53.870269    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:39:53.889444    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:39:53.889532    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:39:53.903017    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:39:53.903092    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:39:53.914475    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:39:53.914534    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:39:53.924951    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:39:53.925019    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:39:53.935333    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:39:53.935402    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:39:53.947576    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:39:53.947647    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:39:53.958047    9066 logs.go:276] 0 containers: []
	W0812 03:39:53.958059    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:39:53.958111    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:39:53.968682    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:39:53.968698    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:39:53.968703    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:39:53.980047    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:39:53.980060    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:39:53.991137    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:39:53.991148    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:39:54.015864    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:39:54.015875    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:39:54.027149    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:39:54.027162    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:39:54.042334    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:39:54.042346    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:39:54.061975    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:39:54.061985    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:39:54.075785    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:39:54.075795    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:39:54.099910    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:39:54.099921    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:39:54.111757    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:39:54.111769    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:39:54.123714    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:39:54.123725    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:39:54.136206    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:39:54.136222    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:39:54.176001    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:39:54.176010    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:39:54.180515    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:39:54.180522    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:39:54.215318    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:39:54.215330    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:39:54.230600    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:39:54.230618    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:39:54.245169    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:39:54.245185    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:39:54.704386    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:39:56.758613    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:39:59.706642    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:39:59.706738    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:39:59.719469    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:39:59.719536    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:39:59.731067    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:39:59.731137    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:39:59.744189    8914 logs.go:276] 4 containers: [9df938e3e4be 8d562f33b5e4 f2d00d5db5b6 08ca3e5de50c]
	I0812 03:39:59.744268    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:39:59.755415    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:39:59.755477    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:39:59.766562    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:39:59.766626    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:39:59.778440    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:39:59.778510    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:39:59.793594    8914 logs.go:276] 0 containers: []
	W0812 03:39:59.793605    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:39:59.793666    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:39:59.805058    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:39:59.805078    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:39:59.805083    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:39:59.817086    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:39:59.817098    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:39:59.840670    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:39:59.840681    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:39:59.852484    8914 logs.go:123] Gathering logs for coredns [8d562f33b5e4] ...
	I0812 03:39:59.852498    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d562f33b5e4"
	I0812 03:39:59.863671    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:39:59.863684    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:39:59.880846    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:39:59.880858    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:39:59.902154    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:39:59.902167    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:39:59.939795    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:39:59.939804    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:39:59.974622    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:39:59.974632    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:39:59.987108    8914 logs.go:123] Gathering logs for coredns [9df938e3e4be] ...
	I0812 03:39:59.987121    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df938e3e4be"
	I0812 03:39:59.998672    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:39:59.998684    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:40:00.010228    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:40:00.010241    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:40:00.022436    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:40:00.022450    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:40:00.037069    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:40:00.037080    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:40:00.041628    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:40:00.041635    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:40:02.558401    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:40:01.760109    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:40:01.760220    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:40:01.772441    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:40:01.772516    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:40:01.783608    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:40:01.783683    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:40:01.793678    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:40:01.793750    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:40:01.804053    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:40:01.804119    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:40:01.814278    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:40:01.814357    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:40:01.824473    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:40:01.824537    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:40:01.838822    9066 logs.go:276] 0 containers: []
	W0812 03:40:01.838833    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:40:01.838886    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:40:01.849698    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:40:01.849714    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:40:01.849719    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:40:01.864043    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:40:01.864055    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:40:01.875931    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:40:01.875942    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:40:01.887852    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:40:01.887862    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:40:01.912290    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:40:01.912302    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:40:01.926983    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:40:01.926994    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:40:01.931381    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:40:01.931391    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:40:01.945495    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:40:01.945505    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:40:01.959347    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:40:01.959360    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:40:01.970778    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:40:01.970789    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:40:01.982279    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:40:01.982290    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:40:02.006014    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:40:02.006024    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:40:02.017177    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:40:02.017187    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:40:02.053802    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:40:02.053811    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:40:02.069006    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:40:02.069017    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:40:02.091774    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:40:02.091785    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:40:02.103692    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:40:02.103702    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:40:04.643293    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:40:07.559289    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:40:07.559500    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:40:07.588894    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:40:07.589022    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:40:07.607409    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:40:07.607496    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:40:07.621833    8914 logs.go:276] 4 containers: [9df938e3e4be 8d562f33b5e4 f2d00d5db5b6 08ca3e5de50c]
	I0812 03:40:07.621910    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:40:07.633986    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:40:07.634061    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:40:07.644994    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:40:07.645062    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:40:07.655967    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:40:07.656034    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:40:07.666272    8914 logs.go:276] 0 containers: []
	W0812 03:40:07.666285    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:40:07.666335    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:40:07.676909    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:40:07.676927    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:40:07.676932    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:40:07.688251    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:40:07.688261    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:40:07.724387    8914 logs.go:123] Gathering logs for coredns [9df938e3e4be] ...
	I0812 03:40:07.724397    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df938e3e4be"
	I0812 03:40:07.736014    8914 logs.go:123] Gathering logs for coredns [8d562f33b5e4] ...
	I0812 03:40:07.736026    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d562f33b5e4"
	I0812 03:40:07.747299    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:40:07.747309    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:40:07.762668    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:40:07.762682    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:40:07.780464    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:40:07.780476    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:40:07.792386    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:40:07.792397    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:40:07.829420    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:40:07.829436    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:40:07.851256    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:40:07.851267    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:40:07.868938    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:40:07.868949    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:40:07.880504    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:40:07.880517    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:40:07.904257    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:40:07.904266    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:40:07.908839    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:40:07.908847    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:40:07.923797    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:40:07.923808    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
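
Each gathering pass above follows the same two-step shape: discover container IDs per component with a docker name filter, then tail each container's recent logs. A sketch of that loop under the same assumptions (local docker CLI in place of minikube's ssh_runner; the component list is taken from the k8s_ filters in the log, and this is illustrative, not minikube's logs.go implementation):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // components mirrors the k8s_ name filters used in the log above.
    var components = []string{
    	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    	"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
    }

    func main() {
    	for _, c := range components {
    		// Step 1: discover container IDs, as in
    		//   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
    		if err != nil {
    			fmt.Printf("listing %s containers: %v\n", c, err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		fmt.Printf("%d containers: %v\n", len(ids), ids)
    		// Step 2: tail each container's recent logs, as in
    		//   docker logs --tail 400 <id>
    		for _, id := range ids {
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
    		}
    	}
    }

Note that a component with zero matches (kindnet here) simply yields an empty ID list, which is what produces the repeated `No container was found matching "kindnet"` warnings in the log.
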
	I0812 03:40:09.645463    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:40:09.645568    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:40:09.660988    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:40:09.661056    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:40:09.675671    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:40:09.675741    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:40:09.694064    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:40:09.694130    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:40:09.704527    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:40:09.704586    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:40:09.715251    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:40:09.715318    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:40:09.725980    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:40:09.726042    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:40:09.738405    9066 logs.go:276] 0 containers: []
	W0812 03:40:09.738416    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:40:09.738469    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:40:09.749107    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:40:09.749124    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:40:09.749129    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:40:09.762675    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:40:09.762686    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:40:09.774563    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:40:09.774574    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:40:09.789209    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:40:09.789220    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:40:09.793342    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:40:09.793351    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:40:09.829773    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:40:09.829788    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:40:09.844593    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:40:09.844604    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:40:09.856535    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:40:09.856545    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:40:09.874600    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:40:09.874610    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:40:09.886029    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:40:09.886038    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:40:09.897247    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:40:09.897256    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:40:09.908272    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:40:09.908284    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:40:09.945387    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:40:09.945396    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:40:09.956700    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:40:09.956713    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:40:09.980933    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:40:09.980941    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:40:09.993190    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:40:09.993201    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:40:10.025666    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:40:10.025681    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:40:10.437516    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:40:12.541837    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:40:15.439855    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:40:15.440198    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:40:15.474014    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:40:15.474145    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:40:15.492252    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:40:15.492350    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:40:15.507004    8914 logs.go:276] 4 containers: [9df938e3e4be 8d562f33b5e4 f2d00d5db5b6 08ca3e5de50c]
	I0812 03:40:15.507088    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:40:15.519501    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:40:15.519567    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:40:15.530122    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:40:15.530189    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:40:15.540834    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:40:15.540906    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:40:15.552989    8914 logs.go:276] 0 containers: []
	W0812 03:40:15.553003    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:40:15.553068    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:40:15.563860    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:40:15.563878    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:40:15.563882    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:40:15.600109    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:40:15.600118    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:40:15.615304    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:40:15.615319    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:40:15.627473    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:40:15.627484    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:40:15.639545    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:40:15.639557    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:40:15.651990    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:40:15.652001    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:40:15.674814    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:40:15.674825    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:40:15.689944    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:40:15.689956    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:40:15.694388    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:40:15.694393    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:40:15.705643    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:40:15.705657    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:40:15.741089    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:40:15.741099    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:40:15.755611    8914 logs.go:123] Gathering logs for coredns [9df938e3e4be] ...
	I0812 03:40:15.755626    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df938e3e4be"
	I0812 03:40:15.767087    8914 logs.go:123] Gathering logs for coredns [8d562f33b5e4] ...
	I0812 03:40:15.767101    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d562f33b5e4"
	I0812 03:40:15.778164    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:40:15.778176    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:40:15.801751    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:40:15.801769    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:40:18.314566    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:40:17.544492    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:40:17.544833    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:40:17.580988    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:40:17.581125    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:40:17.601692    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:40:17.601815    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:40:17.616720    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:40:17.616796    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:40:17.635355    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:40:17.635430    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:40:17.646476    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:40:17.646545    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:40:17.657163    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:40:17.657241    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:40:17.669435    9066 logs.go:276] 0 containers: []
	W0812 03:40:17.669448    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:40:17.669511    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:40:17.679944    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:40:17.679962    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:40:17.679968    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:40:17.697598    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:40:17.697608    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:40:17.735959    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:40:17.735972    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:40:17.749865    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:40:17.749876    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:40:17.764000    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:40:17.764011    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:40:17.775975    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:40:17.775988    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:40:17.787450    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:40:17.787462    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:40:17.798729    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:40:17.798739    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:40:17.822580    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:40:17.822587    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:40:17.834991    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:40:17.835002    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:40:17.850258    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:40:17.850273    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:40:17.861470    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:40:17.861482    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:40:17.865909    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:40:17.865917    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:40:17.899324    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:40:17.899336    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:40:17.924774    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:40:17.924784    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:40:17.939100    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:40:17.939112    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:40:17.950836    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:40:17.950846    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:40:20.464423    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:40:23.316912    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:40:23.317065    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:40:23.344716    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:40:23.344839    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:40:23.359929    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:40:23.360007    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:40:23.372382    8914 logs.go:276] 4 containers: [9df938e3e4be 8d562f33b5e4 f2d00d5db5b6 08ca3e5de50c]
	I0812 03:40:23.372455    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:40:23.383401    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:40:23.383466    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:40:23.394106    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:40:23.394188    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:40:23.404316    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:40:23.404375    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:40:23.414673    8914 logs.go:276] 0 containers: []
	W0812 03:40:23.414683    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:40:23.414737    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:40:23.425599    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:40:23.425616    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:40:23.425621    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:40:23.460959    8914 logs.go:123] Gathering logs for coredns [8d562f33b5e4] ...
	I0812 03:40:23.460970    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d562f33b5e4"
	I0812 03:40:23.476443    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:40:23.476456    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:40:23.500814    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:40:23.500823    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:40:23.512267    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:40:23.512277    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:40:23.527180    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:40:23.527190    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:40:23.539067    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:40:23.539077    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:40:23.543593    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:40:23.543600    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:40:23.580956    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:40:23.580966    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:40:23.595836    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:40:23.595849    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:40:23.610168    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:40:23.610178    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:40:23.622123    8914 logs.go:123] Gathering logs for coredns [9df938e3e4be] ...
	I0812 03:40:23.622134    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df938e3e4be"
	I0812 03:40:23.634727    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:40:23.634738    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:40:23.646936    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:40:23.646949    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:40:23.664952    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:40:23.664966    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:40:25.467050    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:40:25.467403    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:40:25.500276    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:40:25.500408    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:40:25.520516    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:40:25.520616    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:40:25.534342    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:40:25.534418    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:40:25.546564    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:40:25.546643    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:40:25.557352    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:40:25.557422    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:40:25.568054    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:40:25.568119    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:40:25.579897    9066 logs.go:276] 0 containers: []
	W0812 03:40:25.579918    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:40:25.579981    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:40:25.594361    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:40:25.594378    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:40:25.594383    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:40:25.599191    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:40:25.599199    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:40:25.622241    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:40:25.622254    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:40:25.642105    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:40:25.642116    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:40:25.654165    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:40:25.654176    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:40:25.692828    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:40:25.692841    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:40:25.718489    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:40:25.718500    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:40:25.734470    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:40:25.734485    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:40:25.758247    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:40:25.758254    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:40:25.775198    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:40:25.775210    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:40:25.810251    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:40:25.810266    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:40:25.825445    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:40:25.825457    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:40:25.837548    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:40:25.837564    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:40:25.849365    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:40:25.849376    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:40:25.861185    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:40:25.861195    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:40:25.874843    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:40:25.874854    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:40:25.888974    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:40:25.888983    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:40:26.190294    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:40:28.405644    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:40:31.192427    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:40:31.192620    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:40:31.210582    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:40:31.210676    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:40:31.224763    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:40:31.224842    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:40:31.236793    8914 logs.go:276] 4 containers: [9df938e3e4be 8d562f33b5e4 f2d00d5db5b6 08ca3e5de50c]
	I0812 03:40:31.236862    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:40:31.252555    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:40:31.252627    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:40:31.262915    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:40:31.262972    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:40:31.274117    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:40:31.274180    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:40:31.284495    8914 logs.go:276] 0 containers: []
	W0812 03:40:31.284506    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:40:31.284555    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:40:31.294605    8914 logs.go:276] 1 containers: [d33ae37b24cf]
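
Between probes, each component's containers are discovered with one docker ps call per name filter; a filter with zero matches produces the W-level "No container was found matching" line, as happens for "kindnet" throughout this run. A hypothetical, self-contained sketch of that discovery step, assuming only that the docker CLI is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		// Mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("listing %s containers: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
	}
}
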
	I0812 03:40:31.294621    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:40:31.294627    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:40:31.299129    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:40:31.299136    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:40:31.317062    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:40:31.317072    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:40:31.328816    8914 logs.go:123] Gathering logs for coredns [9df938e3e4be] ...
	I0812 03:40:31.328827    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df938e3e4be"
	I0812 03:40:31.340420    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:40:31.340434    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:40:31.366674    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:40:31.366683    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:40:31.379107    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:40:31.379117    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:40:31.396002    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:40:31.396012    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:40:31.407590    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:40:31.407600    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:40:31.444555    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:40:31.444563    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:40:31.480178    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:40:31.480189    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:40:31.492662    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:40:31.492675    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:40:31.507049    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:40:31.507061    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:40:31.521849    8914 logs.go:123] Gathering logs for coredns [8d562f33b5e4] ...
	I0812 03:40:31.521860    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d562f33b5e4"
	I0812 03:40:31.533618    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:40:31.533629    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
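
Each "Gathering logs for ..." pair then tails either a journald unit (kubelet, docker/cri-docker) or one discovered container via docker logs --tail 400. An illustrative sketch of that gathering step, reusing a coredns container ID from the cycle above and assuming journalctl and docker are reachable the same way:

package main

import (
	"fmt"
	"os/exec"
)

// tail runs one gathering command and prints its combined output,
// mirroring the /bin/bash -c invocations in the log above.
func tail(name string, cmd ...string) {
	out, err := exec.Command(cmd[0], cmd[1:]...).CombinedOutput()
	if err != nil {
		fmt.Printf("gathering %s failed: %v\n", name, err)
	}
	fmt.Printf("==> %s <==\n%s\n", name, out)
}

func main() {
	tail("kubelet", "sudo", "journalctl", "-u", "kubelet", "-n", "400")
	tail("Docker", "sudo", "journalctl", "-u", "docker", "-u", "cri-docker", "-n", "400")
	tail("coredns [9df938e3e4be]", "docker", "logs", "--tail", "400", "9df938e3e4be")
}
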
	I0812 03:40:34.054236    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:40:33.408385    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:40:33.408709    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:40:33.441544    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:40:33.441676    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:40:33.460757    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:40:33.460857    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:40:33.479576    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:40:33.479656    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:40:33.494115    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:40:33.494179    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:40:33.505072    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:40:33.505140    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:40:33.521557    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:40:33.521631    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:40:33.531625    9066 logs.go:276] 0 containers: []
	W0812 03:40:33.531636    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:40:33.531689    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:40:33.542338    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:40:33.542357    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:40:33.542363    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:40:33.555071    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:40:33.555084    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:40:33.567610    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:40:33.567623    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:40:33.572360    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:40:33.572371    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:40:33.598209    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:40:33.598220    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:40:33.613064    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:40:33.613077    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:40:33.630443    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:40:33.630457    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:40:33.648862    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:40:33.648877    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:40:33.686352    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:40:33.686375    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:40:33.701878    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:40:33.701904    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:40:33.715102    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:40:33.715115    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:40:33.727020    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:40:33.727033    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:40:33.752410    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:40:33.752424    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:40:33.786723    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:40:33.786734    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:40:33.805965    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:40:33.805976    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:40:33.821232    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:40:33.821242    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:40:33.832934    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:40:33.832948    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:40:39.056493    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:40:39.056703    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:40:39.072815    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:40:39.072900    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:40:39.085853    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:40:39.085927    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:40:39.100635    8914 logs.go:276] 4 containers: [9df938e3e4be 8d562f33b5e4 f2d00d5db5b6 08ca3e5de50c]
	I0812 03:40:39.100709    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:40:39.112262    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:40:39.112318    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:40:39.122910    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:40:39.122977    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:40:39.133836    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:40:39.133900    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:40:39.144028    8914 logs.go:276] 0 containers: []
	W0812 03:40:39.144040    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:40:39.144096    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:40:39.157788    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:40:39.157807    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:40:39.157811    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:40:39.193651    8914 logs.go:123] Gathering logs for coredns [8d562f33b5e4] ...
	I0812 03:40:39.193665    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d562f33b5e4"
	I0812 03:40:39.206287    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:40:39.206298    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:40:39.218462    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:40:39.218475    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:40:39.232657    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:40:39.232671    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:40:39.256393    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:40:39.256401    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:40:39.293289    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:40:39.293304    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:40:39.297851    8914 logs.go:123] Gathering logs for coredns [9df938e3e4be] ...
	I0812 03:40:39.297857    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df938e3e4be"
	I0812 03:40:39.312879    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:40:39.312893    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:40:39.330412    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:40:39.330422    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:40:39.341682    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:40:39.341696    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:40:39.353418    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:40:39.353432    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:40:39.367566    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:40:39.367578    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:40:39.379608    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:40:39.379623    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:40:39.391058    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:40:39.391072    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:40:36.346446    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:40:41.907999    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:40:41.348777    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:40:41.349307    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:40:41.364427    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:40:41.364517    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:40:41.376648    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:40:41.376715    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:40:41.389830    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:40:41.389895    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:40:41.399993    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:40:41.400062    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:40:41.410081    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:40:41.410150    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:40:41.420865    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:40:41.420931    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:40:41.431006    9066 logs.go:276] 0 containers: []
	W0812 03:40:41.431018    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:40:41.431072    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:40:41.441499    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:40:41.441517    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:40:41.441523    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:40:41.458790    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:40:41.458803    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:40:41.483480    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:40:41.483488    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:40:41.498287    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:40:41.498298    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:40:41.514144    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:40:41.514156    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:40:41.525997    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:40:41.526007    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:40:41.538137    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:40:41.538147    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:40:41.549978    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:40:41.549993    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:40:41.554485    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:40:41.554494    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:40:41.590216    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:40:41.590228    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:40:41.604734    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:40:41.604746    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:40:41.619396    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:40:41.619408    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:40:41.657201    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:40:41.657215    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:40:41.682063    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:40:41.682075    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:40:41.693908    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:40:41.693923    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:40:41.715825    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:40:41.715840    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:40:41.734064    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:40:41.734076    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:40:44.247995    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:40:46.910279    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:40:46.910396    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:40:46.921750    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:40:46.921816    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:40:46.932092    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:40:46.932157    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:40:46.942930    8914 logs.go:276] 4 containers: [9df938e3e4be 8d562f33b5e4 f2d00d5db5b6 08ca3e5de50c]
	I0812 03:40:46.943002    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:40:46.953093    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:40:46.953156    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:40:46.963972    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:40:46.964031    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:40:46.975132    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:40:46.975208    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:40:46.985733    8914 logs.go:276] 0 containers: []
	W0812 03:40:46.985748    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:40:46.985800    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:40:46.995822    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:40:46.995839    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:40:46.995843    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:40:47.009989    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:40:47.010002    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:40:47.024042    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:40:47.024052    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:40:47.036361    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:40:47.036373    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:40:47.054563    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:40:47.054576    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:40:47.066546    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:40:47.066557    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:40:47.078452    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:40:47.078462    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:40:47.116092    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:40:47.116099    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:40:47.152332    8914 logs.go:123] Gathering logs for coredns [9df938e3e4be] ...
	I0812 03:40:47.152342    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df938e3e4be"
	I0812 03:40:47.163900    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:40:47.163909    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:40:47.188624    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:40:47.188633    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:40:47.193464    8914 logs.go:123] Gathering logs for coredns [8d562f33b5e4] ...
	I0812 03:40:47.193474    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d562f33b5e4"
	I0812 03:40:47.204777    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:40:47.204792    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:40:47.219905    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:40:47.219915    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:40:47.237583    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:40:47.237594    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:40:49.250315    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:40:49.250466    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:40:49.263458    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:40:49.263539    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:40:49.274357    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:40:49.274429    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:40:49.285263    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:40:49.285331    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:40:49.295566    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:40:49.295637    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:40:49.305599    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:40:49.305667    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:40:49.317385    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:40:49.317444    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:40:49.327617    9066 logs.go:276] 0 containers: []
	W0812 03:40:49.327634    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:40:49.327690    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:40:49.340175    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:40:49.340194    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:40:49.340200    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:40:49.351553    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:40:49.351564    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:40:49.376043    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:40:49.376053    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:40:49.414418    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:40:49.414431    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:40:49.426517    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:40:49.426532    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:40:49.437548    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:40:49.437558    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:40:49.454702    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:40:49.454713    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:40:49.472107    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:40:49.472116    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:40:49.487668    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:40:49.487678    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:40:49.499479    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:40:49.499490    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:40:49.540179    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:40:49.540190    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:40:49.554889    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:40:49.554901    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:40:49.579779    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:40:49.579790    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:40:49.598884    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:40:49.598895    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:40:49.603517    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:40:49.603524    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:40:49.627730    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:40:49.627740    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:40:49.639237    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:40:49.639251    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:40:49.752156    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:40:52.151487    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:40:54.754408    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:40:54.754562    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:40:54.774550    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:40:54.774638    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:40:54.789064    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:40:54.789139    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:40:54.807448    8914 logs.go:276] 4 containers: [9df938e3e4be 8d562f33b5e4 f2d00d5db5b6 08ca3e5de50c]
	I0812 03:40:54.807510    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:40:54.817975    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:40:54.818038    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:40:54.828863    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:40:54.828934    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:40:54.848313    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:40:54.848429    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:40:54.864206    8914 logs.go:276] 0 containers: []
	W0812 03:40:54.864220    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:40:54.864274    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:40:54.878165    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:40:54.878183    8914 logs.go:123] Gathering logs for coredns [9df938e3e4be] ...
	I0812 03:40:54.878187    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df938e3e4be"
	I0812 03:40:54.890189    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:40:54.890205    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:40:54.901854    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:40:54.901863    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:40:54.913617    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:40:54.913626    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:40:54.930833    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:40:54.930842    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:40:54.942910    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:40:54.942919    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:40:54.965977    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:40:54.965983    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:40:54.980103    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:40:54.980117    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:40:54.985058    8914 logs.go:123] Gathering logs for coredns [8d562f33b5e4] ...
	I0812 03:40:54.985064    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d562f33b5e4"
	I0812 03:40:54.996729    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:40:54.996744    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:40:55.012377    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:40:55.012386    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:40:55.050156    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:40:55.050164    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:40:55.091208    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:40:55.091224    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:40:55.106156    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:40:55.106168    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:40:55.124181    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:40:55.124192    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:40:57.642309    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:40:57.153733    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:40:57.153879    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:40:57.165531    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:40:57.165607    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:40:57.176720    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:40:57.176802    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:40:57.187449    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:40:57.187524    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:40:57.206217    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:40:57.206287    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:40:57.217252    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:40:57.217327    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:40:57.230276    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:40:57.230342    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:40:57.240247    9066 logs.go:276] 0 containers: []
	W0812 03:40:57.240259    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:40:57.240315    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:40:57.250727    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:40:57.250745    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:40:57.250750    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:40:57.265769    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:40:57.265788    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:40:57.279972    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:40:57.279983    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:40:57.291398    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:40:57.291409    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:40:57.303178    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:40:57.303187    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:40:57.326319    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:40:57.326327    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:40:57.330544    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:40:57.330550    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:40:57.364784    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:40:57.364795    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:40:57.379836    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:40:57.379847    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:40:57.391802    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:40:57.391814    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:40:57.403110    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:40:57.403121    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:40:57.415877    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:40:57.415888    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:40:57.452486    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:40:57.452494    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:40:57.469973    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:40:57.469984    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:40:57.482128    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:40:57.482141    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:40:57.507401    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:40:57.507411    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:40:57.518593    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:40:57.518605    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:41:00.038315    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:41:02.644584    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:41:02.644813    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:41:02.671842    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:41:02.671947    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:41:02.689324    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:41:02.689413    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:41:02.702873    8914 logs.go:276] 4 containers: [9df938e3e4be 8d562f33b5e4 f2d00d5db5b6 08ca3e5de50c]
	I0812 03:41:02.702945    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:41:02.719422    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:41:02.719488    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:41:02.729688    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:41:02.729759    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:41:02.740168    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:41:02.740237    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:41:02.750421    8914 logs.go:276] 0 containers: []
	W0812 03:41:02.750441    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:41:02.750511    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:41:02.761552    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:41:02.761571    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:41:02.761576    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:41:02.773810    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:41:02.773823    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:41:02.785632    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:41:02.785645    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:41:02.810428    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:41:02.810436    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:41:02.824584    8914 logs.go:123] Gathering logs for coredns [9df938e3e4be] ...
	I0812 03:41:02.824599    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df938e3e4be"
	I0812 03:41:02.836354    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:41:02.836370    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:41:02.852693    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:41:02.852702    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:41:02.864183    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:41:02.864196    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:41:02.903560    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:41:02.903576    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:41:02.926638    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:41:02.926653    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:41:02.938177    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:41:02.938186    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:41:02.950887    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:41:02.950901    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:41:02.955524    8914 logs.go:123] Gathering logs for coredns [8d562f33b5e4] ...
	I0812 03:41:02.955530    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d562f33b5e4"
	I0812 03:41:02.971549    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:41:02.971561    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:41:02.989673    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:41:02.989684    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:41:05.039038    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:41:05.039199    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:41:05.053115    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:41:05.053201    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:41:05.064215    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:41:05.064281    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:41:05.075388    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:41:05.075460    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:41:05.088861    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:41:05.088931    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:41:05.099073    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:41:05.099141    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:41:05.110918    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:41:05.110993    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:41:05.121460    9066 logs.go:276] 0 containers: []
	W0812 03:41:05.121472    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:41:05.121531    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:41:05.132268    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:41:05.132286    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:41:05.132291    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:41:05.145752    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:41:05.145761    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:41:05.160421    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:41:05.160433    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:41:05.172598    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:41:05.172614    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:41:05.188856    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:41:05.188866    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:41:05.200254    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:41:05.200266    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:41:05.204340    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:41:05.204348    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:41:05.240415    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:41:05.240429    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:41:05.252167    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:41:05.252178    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:41:05.275544    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:41:05.275555    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:41:05.287942    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:41:05.287955    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:41:05.299280    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:41:05.299289    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:41:05.314441    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:41:05.314451    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:41:05.328476    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:41:05.328489    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:41:05.340001    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:41:05.340012    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:41:05.357411    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:41:05.357424    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:41:05.395520    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:41:05.395530    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:41:05.528573    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:41:07.923452    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:41:10.529809    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:41:10.530031    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:41:10.553995    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:41:10.554079    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:41:10.570106    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:41:10.570177    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:41:10.582042    8914 logs.go:276] 4 containers: [9df938e3e4be 8d562f33b5e4 f2d00d5db5b6 08ca3e5de50c]
	I0812 03:41:10.582105    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:41:10.592358    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:41:10.592417    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:41:10.603424    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:41:10.603486    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:41:10.614912    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:41:10.614982    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:41:10.631694    8914 logs.go:276] 0 containers: []
	W0812 03:41:10.631705    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:41:10.631753    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:41:10.642336    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:41:10.642355    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:41:10.642360    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:41:10.679347    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:41:10.679363    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:41:10.691749    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:41:10.691763    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:41:10.704759    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:41:10.704772    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:41:10.722550    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:41:10.722561    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:41:10.740230    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:41:10.740244    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:41:10.751653    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:41:10.751666    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:41:10.775739    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:41:10.775747    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:41:10.780417    8914 logs.go:123] Gathering logs for coredns [9df938e3e4be] ...
	I0812 03:41:10.780423    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df938e3e4be"
	I0812 03:41:10.791710    8914 logs.go:123] Gathering logs for coredns [8d562f33b5e4] ...
	I0812 03:41:10.791722    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d562f33b5e4"
	I0812 03:41:10.810238    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:41:10.810249    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:41:10.826191    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:41:10.826202    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:41:10.861588    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:41:10.861596    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:41:10.877815    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:41:10.877829    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:41:10.891685    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:41:10.891698    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:41:13.405979    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:41:12.926151    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:41:12.926618    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:41:12.957725    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:41:12.957854    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:41:12.986303    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:41:12.986391    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:41:12.998848    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:41:12.998925    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:41:13.009911    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:41:13.009978    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:41:13.034200    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:41:13.034274    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:41:13.060870    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:41:13.060945    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:41:13.071262    9066 logs.go:276] 0 containers: []
	W0812 03:41:13.071277    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:41:13.071335    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:41:13.082250    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:41:13.082268    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:41:13.082274    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:41:13.107597    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:41:13.107613    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:41:13.119335    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:41:13.119351    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:41:13.130706    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:41:13.130719    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:41:13.142747    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:41:13.142759    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:41:13.180220    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:41:13.180233    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:41:13.194385    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:41:13.194398    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:41:13.206385    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:41:13.206399    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:41:13.217951    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:41:13.217965    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:41:13.255077    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:41:13.255086    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:41:13.259307    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:41:13.259316    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:41:13.273306    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:41:13.273320    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:41:13.284555    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:41:13.284566    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:41:13.301477    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:41:13.301487    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:41:13.315685    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:41:13.315695    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:41:13.330591    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:41:13.330602    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:41:13.342192    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:41:13.342203    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:41:15.867480    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:41:18.408191    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:41:18.408409    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:41:18.431864    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:41:18.431962    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:41:18.448045    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:41:18.448133    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:41:18.460439    8914 logs.go:276] 4 containers: [9df938e3e4be 8d562f33b5e4 f2d00d5db5b6 08ca3e5de50c]
	I0812 03:41:18.460513    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:41:18.471882    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:41:18.471954    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:41:18.483144    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:41:18.483212    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:41:18.493866    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:41:18.493934    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:41:18.503651    8914 logs.go:276] 0 containers: []
	W0812 03:41:18.503662    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:41:18.503718    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:41:18.514409    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:41:18.514427    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:41:18.514432    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:41:18.519028    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:41:18.519036    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:41:18.554946    8914 logs.go:123] Gathering logs for coredns [9df938e3e4be] ...
	I0812 03:41:18.554957    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df938e3e4be"
	I0812 03:41:18.566879    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:41:18.566889    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:41:18.581625    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:41:18.581636    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:41:18.599459    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:41:18.599469    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:41:18.614048    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:41:18.614060    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:41:18.628292    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:41:18.628303    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:41:18.640563    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:41:18.640574    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:41:18.665776    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:41:18.665785    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:41:18.677841    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:41:18.677851    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:41:18.715131    8914 logs.go:123] Gathering logs for coredns [8d562f33b5e4] ...
	I0812 03:41:18.715146    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d562f33b5e4"
	I0812 03:41:18.727416    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:41:18.727430    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:41:18.739380    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:41:18.739389    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:41:18.751243    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:41:18.751258    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:41:20.870076    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:41:20.870472    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:41:20.903179    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:41:20.903297    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:41:20.921655    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:41:20.921735    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:41:20.935541    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:41:20.935604    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:41:20.948020    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:41:20.948088    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:41:20.959127    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:41:20.959194    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:41:20.977603    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:41:20.977670    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:41:20.989198    9066 logs.go:276] 0 containers: []
	W0812 03:41:20.989214    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:41:20.989266    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:41:21.006359    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:41:21.006378    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:41:21.006382    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:41:21.026294    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:41:21.026305    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:41:21.042744    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:41:21.042756    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:41:21.060782    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:41:21.060791    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:41:21.264979    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:41:21.083921    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:41:21.083929    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:41:21.088103    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:41:21.088109    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:41:21.102246    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:41:21.102257    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:41:21.114459    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:41:21.114470    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:41:21.126776    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:41:21.126787    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:41:21.141229    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:41:21.141240    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:41:21.165835    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:41:21.165846    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:41:21.188186    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:41:21.188197    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:41:21.200036    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:41:21.200047    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:41:21.211372    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:41:21.211383    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:41:21.226852    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:41:21.226862    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:41:21.238676    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:41:21.238688    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:41:21.277763    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:41:21.277771    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:41:23.822071    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:41:26.267099    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:41:26.267270    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:41:26.281542    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:41:26.281621    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:41:26.293125    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:41:26.293201    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:41:26.304217    8914 logs.go:276] 4 containers: [9df938e3e4be 8d562f33b5e4 f2d00d5db5b6 08ca3e5de50c]
	I0812 03:41:26.304290    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:41:26.329861    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:41:26.329935    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:41:26.340254    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:41:26.340317    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:41:26.350788    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:41:26.350859    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:41:26.361023    8914 logs.go:276] 0 containers: []
	W0812 03:41:26.361038    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:41:26.361101    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:41:26.374816    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:41:26.374832    8914 logs.go:123] Gathering logs for coredns [9df938e3e4be] ...
	I0812 03:41:26.374837    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df938e3e4be"
	I0812 03:41:26.386443    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:41:26.386453    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:41:26.398597    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:41:26.398608    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:41:26.419839    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:41:26.419850    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:41:26.442727    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:41:26.442735    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:41:26.455987    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:41:26.455998    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:41:26.495415    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:41:26.495423    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:41:26.509682    8914 logs.go:123] Gathering logs for coredns [8d562f33b5e4] ...
	I0812 03:41:26.509693    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d562f33b5e4"
	I0812 03:41:26.522047    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:41:26.522059    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:41:26.557250    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:41:26.557264    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:41:26.573142    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:41:26.573153    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:41:26.584637    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:41:26.584647    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:41:26.589270    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:41:26.589279    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:41:26.603846    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:41:26.603857    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:41:26.615894    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:41:26.615906    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:41:29.130071    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:41:28.823989    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:41:28.824107    9066 kubeadm.go:597] duration metric: took 4m3.92323025s to restartPrimaryControlPlane
	W0812 03:41:28.824188    9066 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0812 03:41:28.824226    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0812 03:41:29.861596    9066 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.037366334s)
	I0812 03:41:29.861673    9066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 03:41:29.866697    9066 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 03:41:29.869631    9066 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 03:41:29.872518    9066 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 03:41:29.872524    9066 kubeadm.go:157] found existing configuration files:
	
	I0812 03:41:29.872548    9066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51463 /etc/kubernetes/admin.conf
	I0812 03:41:29.875136    9066 kubeadm.go:163] "https://control-plane.minikube.internal:51463" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51463 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 03:41:29.875158    9066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 03:41:29.878104    9066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51463 /etc/kubernetes/kubelet.conf
	I0812 03:41:29.881013    9066 kubeadm.go:163] "https://control-plane.minikube.internal:51463" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51463 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 03:41:29.881033    9066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 03:41:29.883451    9066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51463 /etc/kubernetes/controller-manager.conf
	I0812 03:41:29.886123    9066 kubeadm.go:163] "https://control-plane.minikube.internal:51463" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51463 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 03:41:29.886150    9066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 03:41:29.889021    9066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51463 /etc/kubernetes/scheduler.conf
	I0812 03:41:29.891393    9066 kubeadm.go:163] "https://control-plane.minikube.internal:51463" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51463 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 03:41:29.891412    9066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
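The grep-then-rm pairs above are minikube's stale-kubeconfig cleanup after the kubeadm reset: each /etc/kubernetes/*.conf is kept only if it already references the expected control-plane endpoint, and is otherwise removed so that kubeadm init can regenerate it. The same check, sketched as a loop (the endpoint is the one from this run; minikube actually runs the four checks individually, as shown above):

    endpoint="https://control-plane.minikube.internal:51463"
    for f in admin kubelet controller-manager scheduler; do
      conf="/etc/kubernetes/$f.conf"
      # Keep the file only if it already points at the expected endpoint.
      sudo grep -q "$endpoint" "$conf" || sudo rm -f "$conf"
    done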
	I0812 03:41:29.894246    9066 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 03:41:29.913748    9066 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0812 03:41:29.913781    9066 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 03:41:29.960422    9066 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 03:41:29.960479    9066 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 03:41:29.960532    9066 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0812 03:41:30.009410    9066 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 03:41:30.013566    9066 out.go:204]   - Generating certificates and keys ...
	I0812 03:41:30.013633    9066 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 03:41:30.013667    9066 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 03:41:30.013717    9066 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0812 03:41:30.013747    9066 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0812 03:41:30.013783    9066 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0812 03:41:30.013809    9066 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0812 03:41:30.013854    9066 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0812 03:41:30.013886    9066 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0812 03:41:30.013928    9066 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0812 03:41:30.013968    9066 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0812 03:41:30.013992    9066 kubeadm.go:310] [certs] Using the existing "sa" key
	I0812 03:41:30.014025    9066 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 03:41:30.113789    9066 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 03:41:30.158655    9066 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 03:41:30.254072    9066 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 03:41:30.394554    9066 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 03:41:30.423983    9066 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 03:41:30.424351    9066 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 03:41:30.424380    9066 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 03:41:30.513844    9066 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 03:41:30.521335    9066 out.go:204]   - Booting up control plane ...
	I0812 03:41:30.521444    9066 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 03:41:30.521495    9066 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 03:41:30.521532    9066 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 03:41:30.521620    9066 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 03:41:30.521725    9066 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0812 03:41:34.132259    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:41:34.132562    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:41:34.166041    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:41:34.166132    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:41:34.181766    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:41:34.181835    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:41:34.197104    8914 logs.go:276] 4 containers: [9df938e3e4be 8d562f33b5e4 f2d00d5db5b6 08ca3e5de50c]
	I0812 03:41:34.197176    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:41:34.212065    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:41:34.212138    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:41:34.223167    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:41:34.223246    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:41:34.234261    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:41:34.234326    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:41:34.244192    8914 logs.go:276] 0 containers: []
	W0812 03:41:34.244204    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:41:34.244266    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:41:34.255343    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:41:34.255362    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:41:34.255368    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:41:34.269776    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:41:34.269787    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:41:34.284570    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:41:34.284580    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:41:34.296093    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:41:34.296105    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:41:34.300891    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:41:34.300898    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:41:34.334495    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:41:34.334511    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:41:34.349232    8914 logs.go:123] Gathering logs for coredns [9df938e3e4be] ...
	I0812 03:41:34.349243    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df938e3e4be"
	I0812 03:41:34.361240    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:41:34.361250    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:41:34.372775    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:41:34.372786    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:41:34.389906    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:41:34.389917    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:41:34.427578    8914 logs.go:123] Gathering logs for coredns [8d562f33b5e4] ...
	I0812 03:41:34.427591    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d562f33b5e4"
	I0812 03:41:34.440676    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:41:34.440689    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:41:34.454150    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:41:34.454162    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:41:35.018532    9066 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502011 seconds
	I0812 03:41:35.018778    9066 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0812 03:41:35.025904    9066 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0812 03:41:35.535408    9066 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0812 03:41:35.535521    9066 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-743000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0812 03:41:36.039516    9066 kubeadm.go:310] [bootstrap-token] Using token: ib1xsa.uqweb83p8pru5fi1
	I0812 03:41:36.042766    9066 out.go:204]   - Configuring RBAC rules ...
	I0812 03:41:36.042819    9066 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0812 03:41:36.042906    9066 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0812 03:41:36.047132    9066 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0812 03:41:36.047942    9066 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0812 03:41:36.048780    9066 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0812 03:41:36.049789    9066 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0812 03:41:36.052922    9066 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0812 03:41:36.230783    9066 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0812 03:41:36.443936    9066 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0812 03:41:36.444333    9066 kubeadm.go:310] 
	I0812 03:41:36.444365    9066 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0812 03:41:36.444368    9066 kubeadm.go:310] 
	I0812 03:41:36.444407    9066 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0812 03:41:36.444411    9066 kubeadm.go:310] 
	I0812 03:41:36.444424    9066 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0812 03:41:36.444461    9066 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0812 03:41:36.444490    9066 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0812 03:41:36.444495    9066 kubeadm.go:310] 
	I0812 03:41:36.444521    9066 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0812 03:41:36.444524    9066 kubeadm.go:310] 
	I0812 03:41:36.444550    9066 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0812 03:41:36.444553    9066 kubeadm.go:310] 
	I0812 03:41:36.444578    9066 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0812 03:41:36.444618    9066 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0812 03:41:36.444660    9066 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0812 03:41:36.444664    9066 kubeadm.go:310] 
	I0812 03:41:36.444708    9066 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0812 03:41:36.444751    9066 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0812 03:41:36.444756    9066 kubeadm.go:310] 
	I0812 03:41:36.444796    9066 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ib1xsa.uqweb83p8pru5fi1 \
	I0812 03:41:36.444853    9066 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a3a24dc3606022793e481fb5bba25e8937e026ae56b76602b092063eafcc562a \
	I0812 03:41:36.444863    9066 kubeadm.go:310] 	--control-plane 
	I0812 03:41:36.444868    9066 kubeadm.go:310] 
	I0812 03:41:36.444912    9066 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0812 03:41:36.444917    9066 kubeadm.go:310] 
	I0812 03:41:36.444957    9066 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ib1xsa.uqweb83p8pru5fi1 \
	I0812 03:41:36.445017    9066 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a3a24dc3606022793e481fb5bba25e8937e026ae56b76602b092063eafcc562a 
	I0812 03:41:36.445300    9066 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0812 03:41:36.445308    9066 cni.go:84] Creating CNI manager for ""
	I0812 03:41:36.445317    9066 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0812 03:41:36.449656    9066 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0812 03:41:36.456585    9066 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0812 03:41:36.460544    9066 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
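The 496-byte conflist itself is not reproduced in the log. For orientation, a bridge configuration in the standard CNI conflist format looks roughly like the sketch below; every field value here is an illustrative assumption, not the exact file minikube wrote:

    # Hypothetical example only -- field values are assumptions, not minikube's actual 1-k8s.conflist.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF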
	I0812 03:41:36.466330    9066 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 03:41:36.466385    9066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 03:41:36.466443    9066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-743000 minikube.k8s.io/updated_at=2024_08_12T03_41_36_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7 minikube.k8s.io/name=stopped-upgrade-743000 minikube.k8s.io/primary=true
	I0812 03:41:36.509340    9066 ops.go:34] apiserver oom_adj: -16
	I0812 03:41:36.509348    9066 kubeadm.go:1113] duration metric: took 43.012042ms to wait for elevateKubeSystemPrivileges
	I0812 03:41:36.509358    9066 kubeadm.go:394] duration metric: took 4m11.6268415s to StartCluster
	I0812 03:41:36.509368    9066 settings.go:142] acquiring lock: {Name:mk405bca217b1764467e7caec79ed71135791229 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:41:36.509453    9066 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:41:36.509857    9066 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/kubeconfig: {Name:mkb70885d9201a61b449567803d8de7b739c5101 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:41:36.510071    9066 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:41:36.510076    9066 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0812 03:41:36.510115    9066 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-743000"
	I0812 03:41:36.510120    9066 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-743000"
	I0812 03:41:36.510133    9066 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-743000"
	I0812 03:41:36.510156    9066 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-743000"
	W0812 03:41:36.510189    9066 addons.go:243] addon storage-provisioner should already be in state true
	I0812 03:41:36.510199    9066 host.go:66] Checking if "stopped-upgrade-743000" exists ...
	I0812 03:41:36.510162    9066 config.go:182] Loaded profile config "stopped-upgrade-743000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0812 03:41:36.514442    9066 out.go:177] * Verifying Kubernetes components...
	I0812 03:41:36.515095    9066 kapi.go:59] client config for stopped-upgrade-743000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/client.key", CAFile:"/Users/jenkins/minikube-integration/19409-6342/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1038744e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0812 03:41:36.518814    9066 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-743000"
	W0812 03:41:36.518821    9066 addons.go:243] addon default-storageclass should already be in state true
	I0812 03:41:36.518829    9066 host.go:66] Checking if "stopped-upgrade-743000" exists ...
	I0812 03:41:36.519460    9066 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 03:41:36.519468    9066 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 03:41:36.519474    9066 sshutil.go:53] new ssh client: &{IP:localhost Port:51428 SSHKeyPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/stopped-upgrade-743000/id_rsa Username:docker}
	I0812 03:41:36.522584    9066 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 03:41:34.466976    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:41:34.467002    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:41:34.492291    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:41:34.492312    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:41:37.006942    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:41:36.526606    9066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 03:41:36.530624    9066 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 03:41:36.530630    9066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 03:41:36.530639    9066 sshutil.go:53] new ssh client: &{IP:localhost Port:51428 SSHKeyPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/stopped-upgrade-743000/id_rsa Username:docker}
	I0812 03:41:36.626442    9066 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 03:41:36.631839    9066 api_server.go:52] waiting for apiserver process to appear ...
	I0812 03:41:36.631886    9066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 03:41:36.635663    9066 api_server.go:72] duration metric: took 125.583583ms to wait for apiserver process to appear ...
	I0812 03:41:36.635670    9066 api_server.go:88] waiting for apiserver healthz status ...
	I0812 03:41:36.635677    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:41:36.699965    9066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 03:41:36.723552    9066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 03:41:42.009098    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:41:42.009303    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:41:42.026848    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:41:42.026938    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:41:42.043146    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:41:42.043223    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:41:42.054975    8914 logs.go:276] 4 containers: [9df938e3e4be 8d562f33b5e4 f2d00d5db5b6 08ca3e5de50c]
	I0812 03:41:42.055045    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:41:42.065341    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:41:42.065417    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:41:42.076046    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:41:42.076107    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:41:42.087906    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:41:42.087971    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:41:42.099081    8914 logs.go:276] 0 containers: []
	W0812 03:41:42.099092    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:41:42.099152    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:41:42.109539    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:41:42.109558    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:41:42.109564    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:41:42.147196    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:41:42.147207    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:41:42.183986    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:41:42.184001    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:41:42.204383    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:41:42.204395    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:41:42.208887    8914 logs.go:123] Gathering logs for coredns [9df938e3e4be] ...
	I0812 03:41:42.208894    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df938e3e4be"
	I0812 03:41:42.219828    8914 logs.go:123] Gathering logs for coredns [08ca3e5de50c] ...
	I0812 03:41:42.219842    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ca3e5de50c"
	I0812 03:41:42.231785    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:41:42.231794    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:41:42.256564    8914 logs.go:123] Gathering logs for coredns [f2d00d5db5b6] ...
	I0812 03:41:42.256575    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d00d5db5b6"
	I0812 03:41:42.268168    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:41:42.268186    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:41:42.280141    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:41:42.280152    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:41:42.294648    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:41:42.294658    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:41:42.312288    8914 logs.go:123] Gathering logs for coredns [8d562f33b5e4] ...
	I0812 03:41:42.312301    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d562f33b5e4"
	I0812 03:41:42.324261    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:41:42.324272    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:41:42.342238    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:41:42.342256    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:41:42.354044    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:41:42.354059    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:41:41.637740    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:41:41.637773    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:41:44.871044    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:41:46.638013    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:41:46.638058    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:41:49.872865    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:41:49.872955    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:41:49.885021    8914 logs.go:276] 1 containers: [b9531f8a0da1]
	I0812 03:41:49.885089    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:41:49.896242    8914 logs.go:276] 1 containers: [b79bf3b01363]
	I0812 03:41:49.896306    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:41:49.907870    8914 logs.go:276] 4 containers: [ab95ef87686c 2872fddd2cc9 9df938e3e4be 8d562f33b5e4]
	I0812 03:41:49.907939    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:41:49.925427    8914 logs.go:276] 1 containers: [0d5822d161e7]
	I0812 03:41:49.925493    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:41:49.937480    8914 logs.go:276] 1 containers: [c841229bc122]
	I0812 03:41:49.937543    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:41:49.949892    8914 logs.go:276] 1 containers: [082fe6b0babd]
	I0812 03:41:49.949991    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:41:49.961926    8914 logs.go:276] 0 containers: []
	W0812 03:41:49.961938    8914 logs.go:278] No container was found matching "kindnet"
	I0812 03:41:49.961997    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:41:49.972935    8914 logs.go:276] 1 containers: [d33ae37b24cf]
	I0812 03:41:49.972954    8914 logs.go:123] Gathering logs for coredns [ab95ef87686c] ...
	I0812 03:41:49.972959    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab95ef87686c"
	I0812 03:41:49.986073    8914 logs.go:123] Gathering logs for kube-scheduler [0d5822d161e7] ...
	I0812 03:41:49.986088    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d5822d161e7"
	I0812 03:41:50.001764    8914 logs.go:123] Gathering logs for container status ...
	I0812 03:41:50.001781    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:41:50.015299    8914 logs.go:123] Gathering logs for dmesg ...
	I0812 03:41:50.015315    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:41:50.020354    8914 logs.go:123] Gathering logs for kube-proxy [c841229bc122] ...
	I0812 03:41:50.020365    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c841229bc122"
	I0812 03:41:50.033749    8914 logs.go:123] Gathering logs for kube-controller-manager [082fe6b0babd] ...
	I0812 03:41:50.033762    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082fe6b0babd"
	I0812 03:41:50.052566    8914 logs.go:123] Gathering logs for kubelet ...
	I0812 03:41:50.052581    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:41:50.092294    8914 logs.go:123] Gathering logs for etcd [b79bf3b01363] ...
	I0812 03:41:50.092313    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b79bf3b01363"
	I0812 03:41:50.107136    8914 logs.go:123] Gathering logs for coredns [2872fddd2cc9] ...
	I0812 03:41:50.107150    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2872fddd2cc9"
	I0812 03:41:50.119668    8914 logs.go:123] Gathering logs for coredns [9df938e3e4be] ...
	I0812 03:41:50.119680    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df938e3e4be"
	I0812 03:41:50.136923    8914 logs.go:123] Gathering logs for coredns [8d562f33b5e4] ...
	I0812 03:41:50.136936    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d562f33b5e4"
	I0812 03:41:50.150282    8914 logs.go:123] Gathering logs for storage-provisioner [d33ae37b24cf] ...
	I0812 03:41:50.150296    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33ae37b24cf"
	I0812 03:41:50.162870    8914 logs.go:123] Gathering logs for kube-apiserver [b9531f8a0da1] ...
	I0812 03:41:50.162882    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9531f8a0da1"
	I0812 03:41:50.178512    8914 logs.go:123] Gathering logs for Docker ...
	I0812 03:41:50.178524    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:41:50.205656    8914 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:41:50.205671    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:41:52.748680    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:41:51.638845    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:41:51.638871    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:41:57.750973    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:41:57.754430    8914 out.go:177] 
	W0812 03:41:57.758419    8914 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0812 03:41:57.758428    8914 out.go:239] * 
	W0812 03:41:57.759094    8914 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:41:57.776376    8914 out.go:177] 
	I0812 03:41:56.639355    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:41:56.639399    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:42:01.640195    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:42:01.640221    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:42:06.641558    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:42:06.641591    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0812 03:42:07.044729    9066 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0812 03:42:07.052863    9066 out.go:177] * Enabled addons: storage-provisioner
	I0812 03:42:07.060019    9066 addons.go:510] duration metric: took 30.550351334s for enable addons: enabled=[storage-provisioner]
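
The two interleaved processes above (pids 8914 and 9066) are both polling the apiserver's /healthz endpoint until a deadline; every probe ends in `context deadline exceeded`, which is what fails the node wait with GUEST_START. A minimal Go sketch of such a probe follows; the URL is taken from the log, while the 5s timeout and the skipped certificate verification are assumptions (the host does not trust the cluster CA).

    // healthz_probe.go: a sketch of the kind of check api_server.go logs above.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://10.0.2.15:8443/healthz")
    	if err != nil {
    		// a hung apiserver yields the "context deadline exceeded" seen above
    		fmt.Println("stopped:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
    }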
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-08-12 10:33:07 UTC, ends at Mon 2024-08-12 10:42:13 UTC. --
	Aug 12 10:41:49 running-upgrade-969000 cri-dockerd[3040]: time="2024-08-12T10:41:49Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 12 10:41:54 running-upgrade-969000 cri-dockerd[3040]: time="2024-08-12T10:41:54Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 12 10:41:58 running-upgrade-969000 cri-dockerd[3040]: time="2024-08-12T10:41:58Z" level=error msg="ContainerStats resp: {0x40008c5580 linux}"
	Aug 12 10:41:58 running-upgrade-969000 cri-dockerd[3040]: time="2024-08-12T10:41:58Z" level=error msg="ContainerStats resp: {0x40008c59c0 linux}"
	Aug 12 10:41:59 running-upgrade-969000 cri-dockerd[3040]: time="2024-08-12T10:41:59Z" level=error msg="ContainerStats resp: {0x40004cca40 linux}"
	Aug 12 10:41:59 running-upgrade-969000 cri-dockerd[3040]: time="2024-08-12T10:41:59Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 12 10:42:00 running-upgrade-969000 cri-dockerd[3040]: time="2024-08-12T10:42:00Z" level=error msg="ContainerStats resp: {0x400079f6c0 linux}"
	Aug 12 10:42:00 running-upgrade-969000 cri-dockerd[3040]: time="2024-08-12T10:42:00Z" level=error msg="ContainerStats resp: {0x40004cdd40 linux}"
	Aug 12 10:42:00 running-upgrade-969000 cri-dockerd[3040]: time="2024-08-12T10:42:00Z" level=error msg="ContainerStats resp: {0x40005f47c0 linux}"
	Aug 12 10:42:00 running-upgrade-969000 cri-dockerd[3040]: time="2024-08-12T10:42:00Z" level=error msg="ContainerStats resp: {0x40005f5140 linux}"
	Aug 12 10:42:00 running-upgrade-969000 cri-dockerd[3040]: time="2024-08-12T10:42:00Z" level=error msg="ContainerStats resp: {0x40003a2ec0 linux}"
	Aug 12 10:42:00 running-upgrade-969000 cri-dockerd[3040]: time="2024-08-12T10:42:00Z" level=error msg="ContainerStats resp: {0x40003a3b00 linux}"
	Aug 12 10:42:00 running-upgrade-969000 cri-dockerd[3040]: time="2024-08-12T10:42:00Z" level=error msg="ContainerStats resp: {0x400098e1c0 linux}"
	Aug 12 10:42:04 running-upgrade-969000 cri-dockerd[3040]: time="2024-08-12T10:42:04Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 12 10:42:09 running-upgrade-969000 cri-dockerd[3040]: time="2024-08-12T10:42:09Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 12 10:42:10 running-upgrade-969000 cri-dockerd[3040]: time="2024-08-12T10:42:10Z" level=error msg="ContainerStats resp: {0x4000942080 linux}"
	Aug 12 10:42:10 running-upgrade-969000 cri-dockerd[3040]: time="2024-08-12T10:42:10Z" level=error msg="ContainerStats resp: {0x4000942780 linux}"
	Aug 12 10:42:11 running-upgrade-969000 cri-dockerd[3040]: time="2024-08-12T10:42:11Z" level=error msg="ContainerStats resp: {0x40004cc440 linux}"
	Aug 12 10:42:12 running-upgrade-969000 cri-dockerd[3040]: time="2024-08-12T10:42:12Z" level=error msg="ContainerStats resp: {0x40004cd7c0 linux}"
	Aug 12 10:42:12 running-upgrade-969000 cri-dockerd[3040]: time="2024-08-12T10:42:12Z" level=error msg="ContainerStats resp: {0x40004cdbc0 linux}"
	Aug 12 10:42:12 running-upgrade-969000 cri-dockerd[3040]: time="2024-08-12T10:42:12Z" level=error msg="ContainerStats resp: {0x40005f4080 linux}"
	Aug 12 10:42:12 running-upgrade-969000 cri-dockerd[3040]: time="2024-08-12T10:42:12Z" level=error msg="ContainerStats resp: {0x400079f640 linux}"
	Aug 12 10:42:12 running-upgrade-969000 cri-dockerd[3040]: time="2024-08-12T10:42:12Z" level=error msg="ContainerStats resp: {0x40005f4b00 linux}"
	Aug 12 10:42:12 running-upgrade-969000 cri-dockerd[3040]: time="2024-08-12T10:42:12Z" level=error msg="ContainerStats resp: {0x40005f4c40 linux}"
	Aug 12 10:42:12 running-upgrade-969000 cri-dockerd[3040]: time="2024-08-12T10:42:12Z" level=error msg="ContainerStats resp: {0x40005f48c0 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	ab95ef87686c3       edaa71f2aee88       24 seconds ago      Running             coredns                   2                   89f71ff46b53e
	2872fddd2cc93       edaa71f2aee88       24 seconds ago      Running             coredns                   2                   3d676269956ce
	9df938e3e4bee       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   3d676269956ce
	8d562f33b5e40       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   89f71ff46b53e
	c841229bc1227       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   269767a40fc3f
	d33ae37b24cfe       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   1f6d448e4e6ea
	b9531f8a0da1a       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   85b010788535d
	b79bf3b013632       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   2b5e941740a41
	082fe6b0babd5       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   29673cf3fcede
	0d5822d161e7b       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   4b9c94445a083
	
	
	==> coredns [2872fddd2cc9] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8235367338426774145.305017957197346119. HINFO: read udp 10.244.0.2:52386->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8235367338426774145.305017957197346119. HINFO: read udp 10.244.0.2:56621->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8235367338426774145.305017957197346119. HINFO: read udp 10.244.0.2:53836->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8235367338426774145.305017957197346119. HINFO: read udp 10.244.0.2:54080->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8235367338426774145.305017957197346119. HINFO: read udp 10.244.0.2:43079->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8235367338426774145.305017957197346119. HINFO: read udp 10.244.0.2:46017->10.0.2.3:53: i/o timeout
	
	
	==> coredns [8d562f33b5e4] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6719776368677916006.2946527086561827018. HINFO: read udp 10.244.0.3:39725->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6719776368677916006.2946527086561827018. HINFO: read udp 10.244.0.3:49984->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6719776368677916006.2946527086561827018. HINFO: read udp 10.244.0.3:36490->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6719776368677916006.2946527086561827018. HINFO: read udp 10.244.0.3:55041->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6719776368677916006.2946527086561827018. HINFO: read udp 10.244.0.3:48374->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6719776368677916006.2946527086561827018. HINFO: read udp 10.244.0.3:38839->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6719776368677916006.2946527086561827018. HINFO: read udp 10.244.0.3:60498->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6719776368677916006.2946527086561827018. HINFO: read udp 10.244.0.3:58698->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6719776368677916006.2946527086561827018. HINFO: read udp 10.244.0.3:48864->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6719776368677916006.2946527086561827018. HINFO: read udp 10.244.0.3:37490->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9df938e3e4be] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2491622057632049949.2478569183417478062. HINFO: read udp 10.244.0.2:44889->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2491622057632049949.2478569183417478062. HINFO: read udp 10.244.0.2:52946->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2491622057632049949.2478569183417478062. HINFO: read udp 10.244.0.2:43381->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2491622057632049949.2478569183417478062. HINFO: read udp 10.244.0.2:58360->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2491622057632049949.2478569183417478062. HINFO: read udp 10.244.0.2:36497->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ab95ef87686c] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3697481840461351313.4829854053773924781. HINFO: read udp 10.244.0.3:59971->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3697481840461351313.4829854053773924781. HINFO: read udp 10.244.0.3:37883->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3697481840461351313.4829854053773924781. HINFO: read udp 10.244.0.3:33248->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3697481840461351313.4829854053773924781. HINFO: read udp 10.244.0.3:41779->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3697481840461351313.4829854053773924781. HINFO: read udp 10.244.0.3:46512->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3697481840461351313.4829854053773924781. HINFO: read udp 10.244.0.3:38906->10.0.2.3:53: i/o timeout
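
All four coredns instances above report the same symptom: forwarded probes to the upstream resolver 10.0.2.3:53 (the built-in DNS of QEMU's user-mode network) time out. A Go sketch that reproduces that kind of lookup against the same upstream; the assumption is that it runs inside the guest, where 10.0.2.3 is routable, and the probe name is arbitrary.

    // dns_probe.go: reproduces the forwarded lookups that CoreDNS reports
    // as "read udp ...->10.0.2.3:53: i/o timeout".
    package main

    import (
    	"context"
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	r := &net.Resolver{
    		PreferGo: true,
    		// send every query to the same upstream CoreDNS forwards to
    		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
    			d := net.Dialer{Timeout: 2 * time.Second}
    			return d.DialContext(ctx, "udp", "10.0.2.3:53")
    		},
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()
    	addrs, err := r.LookupHost(ctx, "kubernetes.io")
    	if err != nil {
    		fmt.Println("lookup failed:", err) // an i/o timeout matches the errors above
    		return
    	}
    	fmt.Println("resolved:", addrs)
    }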
	
	
	==> describe nodes <==
	Name:               running-upgrade-969000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-969000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7
	                    minikube.k8s.io/name=running-upgrade-969000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_12T03_37_56_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 10:37:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-969000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 10:42:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 10:37:56 +0000   Mon, 12 Aug 2024 10:37:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 10:37:56 +0000   Mon, 12 Aug 2024 10:37:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 10:37:56 +0000   Mon, 12 Aug 2024 10:37:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 10:37:56 +0000   Mon, 12 Aug 2024 10:37:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-969000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 de679b1faf584297a177f4478f4801ef
	  System UUID:                de679b1faf584297a177f4478f4801ef
	  Boot ID:                    b017997d-2ba5-4ba5-b19a-a49d321d6ce7
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-6h57w                          100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (3%!)(MISSING)        170Mi (8%!)(MISSING)     4m3s
	  kube-system                 coredns-6d4b75cb6d-tktc5                          100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (3%!)(MISSING)        170Mi (8%!)(MISSING)     4m3s
	  kube-system                 etcd-running-upgrade-969000                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (4%!)(MISSING)       0 (0%!)(MISSING)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-969000             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         4m18s
	  kube-system                 kube-controller-manager-running-upgrade-969000    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         4m18s
	  kube-system                 kube-proxy-pwdzb                                  0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-969000             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         4m18s
	  kube-system                 storage-provisioner                               0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%!)(MISSING)   0 (0%!)(MISSING)
	  memory             240Mi (11%!)(MISSING)  340Mi (16%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)       0 (0%!)(MISSING)
	  hugepages-1Gi      0 (0%!)(MISSING)       0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)       0 (0%!)(MISSING)
	  hugepages-32Mi     0 (0%!)(MISSING)       0 (0%!)(MISSING)
	  hugepages-64Ki     0 (0%!)(MISSING)       0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m2s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m23s (x4 over 4m24s)  kubelet          Node running-upgrade-969000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x4 over 4m24s)  kubelet          Node running-upgrade-969000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x4 over 4m24s)  kubelet          Node running-upgrade-969000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m18s                  kubelet          Node running-upgrade-969000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m18s                  kubelet          Node running-upgrade-969000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s                  kubelet          Node running-upgrade-969000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m18s                  kubelet          Node running-upgrade-969000 status is now: NodeReady
	  Normal  Starting                 4m18s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m4s                   node-controller  Node running-upgrade-969000 event: Registered Node running-upgrade-969000 in Controller
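
A note on the `(5%!)(MISSING)` entries in the resource tables above: kubectl's output contains literal `%` characters and is evidently passed through a printf-style formatter somewhere in the log pipeline, so Go's fmt renders each stray verb as `%!` plus a `(MISSING)` diagnostic. Read `100m (5%!)(MISSING)` as `100m (5%)`, and so on. A small demonstration of the mangling:

    // percent_mangling.go: why the table reads "(5%!)(MISSING)". Passing
    // text containing a literal '%' as the printf FORMAT string (the bug,
    // reproduced deliberately here) triggers fmt's bad-verb diagnostics.
    package main

    import "fmt"

    func main() {
    	kubectlOutput := "100m (5%)"
    	fmt.Println(fmt.Sprintf(kubectlOutput))       // 100m (5%!)(MISSING)
    	fmt.Println(fmt.Sprintf("%s", kubectlOutput)) // 100m (5%)
    }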
	
	
	==> dmesg <==
	[  +1.729594] systemd-fstab-generator[874]: Ignoring "noauto" for root device
	[  +0.077961] systemd-fstab-generator[885]: Ignoring "noauto" for root device
	[  +0.084530] systemd-fstab-generator[896]: Ignoring "noauto" for root device
	[  +1.140942] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.087996] systemd-fstab-generator[1046]: Ignoring "noauto" for root device
	[  +0.079458] systemd-fstab-generator[1057]: Ignoring "noauto" for root device
	[  +2.327544] systemd-fstab-generator[1286]: Ignoring "noauto" for root device
	[  +9.640031] systemd-fstab-generator[1944]: Ignoring "noauto" for root device
	[  +2.278903] systemd-fstab-generator[2209]: Ignoring "noauto" for root device
	[  +0.143268] systemd-fstab-generator[2242]: Ignoring "noauto" for root device
	[  +0.095730] systemd-fstab-generator[2256]: Ignoring "noauto" for root device
	[  +0.089463] systemd-fstab-generator[2271]: Ignoring "noauto" for root device
	[  +2.959202] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.213999] systemd-fstab-generator[2995]: Ignoring "noauto" for root device
	[  +0.060703] systemd-fstab-generator[3008]: Ignoring "noauto" for root device
	[  +0.071436] systemd-fstab-generator[3019]: Ignoring "noauto" for root device
	[  +0.091267] systemd-fstab-generator[3033]: Ignoring "noauto" for root device
	[  +2.299255] systemd-fstab-generator[3184]: Ignoring "noauto" for root device
	[  +2.889432] systemd-fstab-generator[3682]: Ignoring "noauto" for root device
	[  +1.367262] systemd-fstab-generator[3860]: Ignoring "noauto" for root device
	[Aug12 10:34] kauditd_printk_skb: 68 callbacks suppressed
	[Aug12 10:37] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.598424] systemd-fstab-generator[11294]: Ignoring "noauto" for root device
	[  +6.134217] systemd-fstab-generator[11914]: Ignoring "noauto" for root device
	[  +0.460754] systemd-fstab-generator[12045]: Ignoring "noauto" for root device
	
	
	==> etcd [b79bf3b01363] <==
	{"level":"info","ts":"2024-08-12T10:37:51.975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-08-12T10:37:51.976Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-08-12T10:37:51.976Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-12T10:37:51.976Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-12T10:37:51.976Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-12T10:37:51.976Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-12T10:37:51.976Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-12T10:37:52.943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-12T10:37:52.943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-12T10:37:52.943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-08-12T10:37:52.943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-08-12T10:37:52.943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-12T10:37:52.943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-08-12T10:37:52.943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-12T10:37:52.943Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T10:37:52.944Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T10:37:52.944Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T10:37:52.944Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T10:37:52.944Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-969000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-12T10:37:52.944Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-12T10:37:52.944Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-12T10:37:52.944Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-12T10:37:52.944Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-12T10:37:52.945Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-08-12T10:37:52.945Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:42:14 up 9 min,  0 users,  load average: 0.22, 0.31, 0.17
	Linux running-upgrade-969000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [b9531f8a0da1] <==
	I0812 10:37:54.168249       1 controller.go:611] quota admission added evaluator for: namespaces
	I0812 10:37:54.181605       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0812 10:37:54.182917       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0812 10:37:54.184953       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0812 10:37:54.184983       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0812 10:37:54.200258       1 cache.go:39] Caches are synced for autoregister controller
	I0812 10:37:54.200401       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0812 10:37:54.909516       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0812 10:37:55.084851       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0812 10:37:55.087940       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0812 10:37:55.087954       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0812 10:37:55.211056       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0812 10:37:55.220599       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0812 10:37:55.272566       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0812 10:37:55.274247       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0812 10:37:55.274619       1 controller.go:611] quota admission added evaluator for: endpoints
	I0812 10:37:55.275832       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0812 10:37:56.242634       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0812 10:37:56.763172       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0812 10:37:56.770622       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0812 10:37:56.784957       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0812 10:37:56.825796       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0812 10:38:10.856726       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0812 10:38:10.906055       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0812 10:38:11.623780       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [082fe6b0babd] <==
	I0812 10:38:10.011745       1 shared_informer.go:262] Caches are synced for TTL
	I0812 10:38:10.013709       1 shared_informer.go:262] Caches are synced for persistent volume
	I0812 10:38:10.054099       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0812 10:38:10.054106       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0812 10:38:10.055195       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0812 10:38:10.106130       1 shared_informer.go:262] Caches are synced for daemon sets
	I0812 10:38:10.155197       1 shared_informer.go:262] Caches are synced for taint
	I0812 10:38:10.155238       1 shared_informer.go:262] Caches are synced for attach detach
	I0812 10:38:10.155280       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0812 10:38:10.155346       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-969000. Assuming now as a timestamp.
	I0812 10:38:10.155388       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0812 10:38:10.155522       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0812 10:38:10.155662       1 event.go:294] "Event occurred" object="running-upgrade-969000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-969000 event: Registered Node running-upgrade-969000 in Controller"
	I0812 10:38:10.206339       1 shared_informer.go:262] Caches are synced for resource quota
	I0812 10:38:10.215022       1 shared_informer.go:262] Caches are synced for resource quota
	I0812 10:38:10.245236       1 shared_informer.go:262] Caches are synced for deployment
	I0812 10:38:10.254615       1 shared_informer.go:262] Caches are synced for disruption
	I0812 10:38:10.254636       1 disruption.go:371] Sending events to api server.
	I0812 10:38:10.620972       1 shared_informer.go:262] Caches are synced for garbage collector
	I0812 10:38:10.653316       1 shared_informer.go:262] Caches are synced for garbage collector
	I0812 10:38:10.653325       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0812 10:38:10.859271       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-pwdzb"
	I0812 10:38:10.907039       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0812 10:38:11.007374       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-tktc5"
	I0812 10:38:11.009898       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-6h57w"
	
	
	==> kube-proxy [c841229bc122] <==
	I0812 10:38:11.612884       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0812 10:38:11.612913       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0812 10:38:11.612928       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0812 10:38:11.621767       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0812 10:38:11.621779       1 server_others.go:206] "Using iptables Proxier"
	I0812 10:38:11.621792       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0812 10:38:11.621875       1 server.go:661] "Version info" version="v1.24.1"
	I0812 10:38:11.621880       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 10:38:11.622091       1 config.go:317] "Starting service config controller"
	I0812 10:38:11.622097       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0812 10:38:11.622105       1 config.go:226] "Starting endpoint slice config controller"
	I0812 10:38:11.622107       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0812 10:38:11.622351       1 config.go:444] "Starting node config controller"
	I0812 10:38:11.622358       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0812 10:38:11.723151       1 shared_informer.go:262] Caches are synced for node config
	I0812 10:38:11.723176       1 shared_informer.go:262] Caches are synced for service config
	I0812 10:38:11.723192       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0d5822d161e7] <==
	W0812 10:37:54.148108       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0812 10:37:54.148145       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0812 10:37:54.148182       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0812 10:37:54.148198       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0812 10:37:54.148245       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0812 10:37:54.148269       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0812 10:37:54.148301       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0812 10:37:54.148316       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0812 10:37:54.148361       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0812 10:37:54.148384       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0812 10:37:54.148412       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0812 10:37:54.148427       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0812 10:37:54.148479       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0812 10:37:54.148504       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0812 10:37:54.151146       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0812 10:37:54.151157       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0812 10:37:54.966073       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0812 10:37:54.966103       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0812 10:37:54.972689       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0812 10:37:54.972716       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0812 10:37:54.988403       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0812 10:37:54.988414       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0812 10:37:55.113057       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0812 10:37:55.113152       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0812 10:37:55.342279       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
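
The forbidden errors above show the scheduler starting before its RBAC role bindings exist; once the apiserver publishes the bootstrap roles, the informers recover (the final "Caches are synced" line). A sketch that re-asks one of those authorization questions after startup, assuming kubectl on PATH and a reachable cluster; `kubectl auth can-i` exits non-zero when the answer is "no".

    // scheduler_rbac_check.go: re-asks an authorization question the
    // scheduler failed during startup above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("kubectl", "auth", "can-i",
    		"list", "statefulsets.apps", "--as", "system:kube-scheduler")
    	out, err := cmd.CombinedOutput()
    	fmt.Printf("%s", out) // "yes" once the bootstrap RBAC roles have synced
    	if err != nil {
    		fmt.Println("exit:", err)
    	}
    }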
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-08-12 10:33:07 UTC, ends at Mon 2024-08-12 10:42:14 UTC. --
	Aug 12 10:37:57 running-upgrade-969000 kubelet[11920]: I0812 10:37:57.818756   11920 apiserver.go:52] "Watching apiserver"
	Aug 12 10:37:58 running-upgrade-969000 kubelet[11920]: I0812 10:37:58.229762   11920 reconciler.go:157] "Reconciler: start to sync state"
	Aug 12 10:37:58 running-upgrade-969000 kubelet[11920]: E0812 10:37:58.409478   11920 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-969000\" already exists" pod="kube-system/etcd-running-upgrade-969000"
	Aug 12 10:37:58 running-upgrade-969000 kubelet[11920]: E0812 10:37:58.601317   11920 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-969000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-969000"
	Aug 12 10:37:58 running-upgrade-969000 kubelet[11920]: E0812 10:37:58.797621   11920 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-969000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-969000"
	Aug 12 10:37:58 running-upgrade-969000 kubelet[11920]: I0812 10:37:58.995315   11920 request.go:601] Waited for 1.113217012s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Aug 12 10:37:58 running-upgrade-969000 kubelet[11920]: E0812 10:37:58.998185   11920 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-969000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-969000"
	Aug 12 10:38:10 running-upgrade-969000 kubelet[11920]: I0812 10:38:10.075682   11920 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 12 10:38:10 running-upgrade-969000 kubelet[11920]: I0812 10:38:10.076017   11920 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 12 10:38:10 running-upgrade-969000 kubelet[11920]: I0812 10:38:10.161215   11920 topology_manager.go:200] "Topology Admit Handler"
	Aug 12 10:38:10 running-upgrade-969000 kubelet[11920]: I0812 10:38:10.281723   11920 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/00b7f064-8220-47b0-af91-c6586c63baa6-tmp\") pod \"storage-provisioner\" (UID: \"00b7f064-8220-47b0-af91-c6586c63baa6\") " pod="kube-system/storage-provisioner"
	Aug 12 10:38:10 running-upgrade-969000 kubelet[11920]: I0812 10:38:10.281750   11920 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wrn4\" (UniqueName: \"kubernetes.io/projected/00b7f064-8220-47b0-af91-c6586c63baa6-kube-api-access-6wrn4\") pod \"storage-provisioner\" (UID: \"00b7f064-8220-47b0-af91-c6586c63baa6\") " pod="kube-system/storage-provisioner"
	Aug 12 10:38:10 running-upgrade-969000 kubelet[11920]: I0812 10:38:10.861172   11920 topology_manager.go:200] "Topology Admit Handler"
	Aug 12 10:38:10 running-upgrade-969000 kubelet[11920]: I0812 10:38:10.986048   11920 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6ebfa056-3da2-4169-9339-c7cb386320a9-kube-proxy\") pod \"kube-proxy-pwdzb\" (UID: \"6ebfa056-3da2-4169-9339-c7cb386320a9\") " pod="kube-system/kube-proxy-pwdzb"
	Aug 12 10:38:10 running-upgrade-969000 kubelet[11920]: I0812 10:38:10.986082   11920 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6jwv\" (UniqueName: \"kubernetes.io/projected/6ebfa056-3da2-4169-9339-c7cb386320a9-kube-api-access-z6jwv\") pod \"kube-proxy-pwdzb\" (UID: \"6ebfa056-3da2-4169-9339-c7cb386320a9\") " pod="kube-system/kube-proxy-pwdzb"
	Aug 12 10:38:10 running-upgrade-969000 kubelet[11920]: I0812 10:38:10.986119   11920 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ebfa056-3da2-4169-9339-c7cb386320a9-xtables-lock\") pod \"kube-proxy-pwdzb\" (UID: \"6ebfa056-3da2-4169-9339-c7cb386320a9\") " pod="kube-system/kube-proxy-pwdzb"
	Aug 12 10:38:10 running-upgrade-969000 kubelet[11920]: I0812 10:38:10.986130   11920 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ebfa056-3da2-4169-9339-c7cb386320a9-lib-modules\") pod \"kube-proxy-pwdzb\" (UID: \"6ebfa056-3da2-4169-9339-c7cb386320a9\") " pod="kube-system/kube-proxy-pwdzb"
	Aug 12 10:38:11 running-upgrade-969000 kubelet[11920]: I0812 10:38:11.014958   11920 topology_manager.go:200] "Topology Admit Handler"
	Aug 12 10:38:11 running-upgrade-969000 kubelet[11920]: I0812 10:38:11.015838   11920 topology_manager.go:200] "Topology Admit Handler"
	Aug 12 10:38:11 running-upgrade-969000 kubelet[11920]: I0812 10:38:11.187120   11920 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf65cac8-6b33-49b3-bba9-0d208b1cbfac-config-volume\") pod \"coredns-6d4b75cb6d-6h57w\" (UID: \"cf65cac8-6b33-49b3-bba9-0d208b1cbfac\") " pod="kube-system/coredns-6d4b75cb6d-6h57w"
	Aug 12 10:38:11 running-upgrade-969000 kubelet[11920]: I0812 10:38:11.187352   11920 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2drw\" (UniqueName: \"kubernetes.io/projected/3d8badd7-73e0-4be0-a8c6-aced147ae290-kube-api-access-d2drw\") pod \"coredns-6d4b75cb6d-tktc5\" (UID: \"3d8badd7-73e0-4be0-a8c6-aced147ae290\") " pod="kube-system/coredns-6d4b75cb6d-tktc5"
	Aug 12 10:38:11 running-upgrade-969000 kubelet[11920]: I0812 10:38:11.187375   11920 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d8badd7-73e0-4be0-a8c6-aced147ae290-config-volume\") pod \"coredns-6d4b75cb6d-tktc5\" (UID: \"3d8badd7-73e0-4be0-a8c6-aced147ae290\") " pod="kube-system/coredns-6d4b75cb6d-tktc5"
	Aug 12 10:38:11 running-upgrade-969000 kubelet[11920]: I0812 10:38:11.187391   11920 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc8l2\" (UniqueName: \"kubernetes.io/projected/cf65cac8-6b33-49b3-bba9-0d208b1cbfac-kube-api-access-pc8l2\") pod \"coredns-6d4b75cb6d-6h57w\" (UID: \"cf65cac8-6b33-49b3-bba9-0d208b1cbfac\") " pod="kube-system/coredns-6d4b75cb6d-6h57w"
	Aug 12 10:41:49 running-upgrade-969000 kubelet[11920]: I0812 10:41:49.306993   11920 scope.go:110] "RemoveContainer" containerID="08ca3e5de50c47c949fa7dc1d9fe122ba503a55ebbc08e9eec7a98da46df0de7"
	Aug 12 10:41:49 running-upgrade-969000 kubelet[11920]: I0812 10:41:49.321634   11920 scope.go:110] "RemoveContainer" containerID="f2d00d5db5b6729efadb4cccd41f42af0d452c7dc4fec26295b1698a375afabf"
	
	
	==> storage-provisioner [d33ae37b24cf] <==
	I0812 10:38:10.701973       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0812 10:38:10.709121       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0812 10:38:10.709340       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0812 10:38:10.712268       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0812 10:38:10.712366       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-969000_29f368ca-368d-490a-a4d5-a466087a6efd!
	I0812 10:38:10.715374       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"45074f79-4865-4de6-b8b4-651c42f0c322", APIVersion:"v1", ResourceVersion:"332", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-969000_29f368ca-368d-490a-a4d5-a466087a6efd became leader
	I0812 10:38:10.813218       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-969000_29f368ca-368d-490a-a4d5-a466087a6efd!

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-969000 -n running-upgrade-969000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-969000 -n running-upgrade-969000: exit status 2 (15.6060405s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-969000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-969000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-969000
--- FAIL: TestRunningBinaryUpgrade (588.00s)

TestKubernetesUpgrade (19.06s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-917000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-917000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (10.101927292s)

-- stdout --
	* [kubernetes-upgrade-917000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-917000" primary control-plane node in "kubernetes-upgrade-917000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-917000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
-- /stdout --
** stderr ** 
	I0812 03:35:42.203634    8990 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:35:42.203787    8990 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:35:42.203798    8990 out.go:304] Setting ErrFile to fd 2...
	I0812 03:35:42.203801    8990 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:35:42.203937    8990 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:35:42.205327    8990 out.go:298] Setting JSON to false
	I0812 03:35:42.223183    8990 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5712,"bootTime":1723453230,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:35:42.223333    8990 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:35:42.227980    8990 out.go:177] * [kubernetes-upgrade-917000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:35:42.234939    8990 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:35:42.235041    8990 notify.go:220] Checking for updates...
	I0812 03:35:42.241884    8990 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:35:42.244919    8990 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:35:42.247923    8990 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:35:42.250817    8990 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:35:42.253928    8990 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:35:42.257385    8990 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:35:42.257458    8990 config.go:182] Loaded profile config "running-upgrade-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0812 03:35:42.257518    8990 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:35:42.261899    8990 out.go:177] * Using the qemu2 driver based on user configuration
	I0812 03:35:42.268869    8990 start.go:297] selected driver: qemu2
	I0812 03:35:42.268878    8990 start.go:901] validating driver "qemu2" against <nil>
	I0812 03:35:42.268886    8990 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:35:42.271289    8990 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 03:35:42.274843    8990 out.go:177] * Automatically selected the socket_vmnet network
	I0812 03:35:42.277977    8990 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0812 03:35:42.277988    8990 cni.go:84] Creating CNI manager for ""
	I0812 03:35:42.277994    8990 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0812 03:35:42.278017    8990 start.go:340] cluster config:
	{Name:kubernetes-upgrade-917000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-917000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:35:42.281523    8990 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:35:42.285811    8990 out.go:177] * Starting "kubernetes-upgrade-917000" primary control-plane node in "kubernetes-upgrade-917000" cluster
	I0812 03:35:42.293895    8990 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0812 03:35:42.293931    8990 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0812 03:35:42.293949    8990 cache.go:56] Caching tarball of preloaded images
	I0812 03:35:42.294034    8990 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0812 03:35:42.294041    8990 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0812 03:35:42.294098    8990 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/kubernetes-upgrade-917000/config.json ...
	I0812 03:35:42.294110    8990 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/kubernetes-upgrade-917000/config.json: {Name:mk277b30f3c5fc219c4c2aa5b5be7c9f8e5abf8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:35:42.294367    8990 start.go:360] acquireMachinesLock for kubernetes-upgrade-917000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:35:42.294403    8990 start.go:364] duration metric: took 27.167µs to acquireMachinesLock for "kubernetes-upgrade-917000"
	I0812 03:35:42.294416    8990 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-917000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-917000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:35:42.294460    8990 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:35:42.302948    8990 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0812 03:35:42.319677    8990 start.go:159] libmachine.API.Create for "kubernetes-upgrade-917000" (driver="qemu2")
	I0812 03:35:42.319718    8990 client.go:168] LocalClient.Create starting
	I0812 03:35:42.319787    8990 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:35:42.319827    8990 main.go:141] libmachine: Decoding PEM data...
	I0812 03:35:42.319840    8990 main.go:141] libmachine: Parsing certificate...
	I0812 03:35:42.319876    8990 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:35:42.319905    8990 main.go:141] libmachine: Decoding PEM data...
	I0812 03:35:42.319918    8990 main.go:141] libmachine: Parsing certificate...
	I0812 03:35:42.320309    8990 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:35:42.496100    8990 main.go:141] libmachine: Creating SSH key...
	I0812 03:35:42.869790    8990 main.go:141] libmachine: Creating Disk image...
	I0812 03:35:42.869803    8990 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:35:42.870096    8990 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubernetes-upgrade-917000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubernetes-upgrade-917000/disk.qcow2
	I0812 03:35:42.880074    8990 main.go:141] libmachine: STDOUT: 
	I0812 03:35:42.880094    8990 main.go:141] libmachine: STDERR: 
	I0812 03:35:42.880143    8990 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubernetes-upgrade-917000/disk.qcow2 +20000M
	I0812 03:35:42.888043    8990 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:35:42.888058    8990 main.go:141] libmachine: STDERR: 
	I0812 03:35:42.888076    8990 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubernetes-upgrade-917000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubernetes-upgrade-917000/disk.qcow2
	I0812 03:35:42.888083    8990 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:35:42.888099    8990 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:35:42.888128    8990 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubernetes-upgrade-917000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubernetes-upgrade-917000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubernetes-upgrade-917000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:53:22:64:7f:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubernetes-upgrade-917000/disk.qcow2
	I0812 03:35:42.889688    8990 main.go:141] libmachine: STDOUT: 
	I0812 03:35:42.889703    8990 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:35:42.889722    8990 client.go:171] duration metric: took 570.005584ms to LocalClient.Create
	I0812 03:35:44.891824    8990 start.go:128] duration metric: took 2.597384541s to createHost
	I0812 03:35:44.891867    8990 start.go:83] releasing machines lock for "kubernetes-upgrade-917000", held for 2.597494125s
	W0812 03:35:44.891907    8990 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:35:44.902112    8990 out.go:177] * Deleting "kubernetes-upgrade-917000" in qemu2 ...
	W0812 03:35:44.925194    8990 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:35:44.925207    8990 start.go:729] Will try again in 5 seconds ...
	I0812 03:35:49.927383    8990 start.go:360] acquireMachinesLock for kubernetes-upgrade-917000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:35:49.927774    8990 start.go:364] duration metric: took 291.875µs to acquireMachinesLock for "kubernetes-upgrade-917000"
	I0812 03:35:49.927847    8990 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-917000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-917000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:35:49.928067    8990 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:35:49.935794    8990 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0812 03:35:49.971537    8990 start.go:159] libmachine.API.Create for "kubernetes-upgrade-917000" (driver="qemu2")
	I0812 03:35:49.971581    8990 client.go:168] LocalClient.Create starting
	I0812 03:35:49.971685    8990 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:35:49.971749    8990 main.go:141] libmachine: Decoding PEM data...
	I0812 03:35:49.971762    8990 main.go:141] libmachine: Parsing certificate...
	I0812 03:35:49.971825    8990 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:35:49.971864    8990 main.go:141] libmachine: Decoding PEM data...
	I0812 03:35:49.971873    8990 main.go:141] libmachine: Parsing certificate...
	I0812 03:35:49.972748    8990 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:35:50.133635    8990 main.go:141] libmachine: Creating SSH key...
	I0812 03:35:50.213684    8990 main.go:141] libmachine: Creating Disk image...
	I0812 03:35:50.213694    8990 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:35:50.213960    8990 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubernetes-upgrade-917000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubernetes-upgrade-917000/disk.qcow2
	I0812 03:35:50.224437    8990 main.go:141] libmachine: STDOUT: 
	I0812 03:35:50.224456    8990 main.go:141] libmachine: STDERR: 
	I0812 03:35:50.224516    8990 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubernetes-upgrade-917000/disk.qcow2 +20000M
	I0812 03:35:50.233906    8990 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:35:50.233927    8990 main.go:141] libmachine: STDERR: 
	I0812 03:35:50.233940    8990 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubernetes-upgrade-917000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubernetes-upgrade-917000/disk.qcow2
	I0812 03:35:50.233947    8990 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:35:50.233968    8990 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:35:50.234006    8990 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubernetes-upgrade-917000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubernetes-upgrade-917000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubernetes-upgrade-917000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:60:6a:c9:95:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubernetes-upgrade-917000/disk.qcow2
	I0812 03:35:50.236071    8990 main.go:141] libmachine: STDOUT: 
	I0812 03:35:50.236089    8990 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:35:50.236104    8990 client.go:171] duration metric: took 264.519459ms to LocalClient.Create
	I0812 03:35:52.238260    8990 start.go:128] duration metric: took 2.310184917s to createHost
	I0812 03:35:52.238322    8990 start.go:83] releasing machines lock for "kubernetes-upgrade-917000", held for 2.310561833s
	W0812 03:35:52.238567    8990 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-917000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:35:52.249983    8990 out.go:177] 
	W0812 03:35:52.253971    8990 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:35:52.253985    8990 out.go:239] * 
	W0812 03:35:52.255303    8990 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:35:52.265998    8990 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-917000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
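
The trace above shows the fixed shape of a failed qemu2 start: createHost is refused on the socket, the profile is deleted, one retry fires after five seconds, and the run exits 80 with GUEST_PROVISION. A minimal Go sketch of that retry-once control flow, using an illustrative startHost stub in place of the real libmachine create path (this is not minikube's actual start.go):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the libmachine create path; it always fails the
	// same way the log does, so the retry shape is visible end to end.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // the same back-off the log reports
			if err = startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}

Because the second attempt hits the identical refused socket, the retry cannot help; the stop/status/start steps below inherit the same broken environment.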
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-917000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-917000: (3.560491458s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-917000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-917000 status --format={{.Host}}: exit status 7 (55.374542ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-917000 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-917000 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.180313417s)

-- stdout --
	* [kubernetes-upgrade-917000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-917000" primary control-plane node in "kubernetes-upgrade-917000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-917000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-917000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0812 03:35:55.922727    9032 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:35:55.922864    9032 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:35:55.922870    9032 out.go:304] Setting ErrFile to fd 2...
	I0812 03:35:55.922872    9032 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:35:55.923005    9032 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:35:55.924082    9032 out.go:298] Setting JSON to false
	I0812 03:35:55.940375    9032 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5725,"bootTime":1723453230,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:35:55.940447    9032 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:35:55.945156    9032 out.go:177] * [kubernetes-upgrade-917000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:35:55.951163    9032 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:35:55.951223    9032 notify.go:220] Checking for updates...
	I0812 03:35:55.957086    9032 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:35:55.960137    9032 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:35:55.963118    9032 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:35:55.966106    9032 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:35:55.969180    9032 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:35:55.972366    9032 config.go:182] Loaded profile config "kubernetes-upgrade-917000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0812 03:35:55.972615    9032 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:35:55.977050    9032 out.go:177] * Using the qemu2 driver based on existing profile
	I0812 03:35:55.984055    9032 start.go:297] selected driver: qemu2
	I0812 03:35:55.984060    9032 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-917000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-917000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:35:55.984105    9032 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:35:55.986324    9032 cni.go:84] Creating CNI manager for ""
	I0812 03:35:55.986342    9032 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0812 03:35:55.986367    9032 start.go:340] cluster config:
	{Name:kubernetes-upgrade-917000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:kubernetes-upgrade-917000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:35:55.989755    9032 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:35:55.996924    9032 out.go:177] * Starting "kubernetes-upgrade-917000" primary control-plane node in "kubernetes-upgrade-917000" cluster
	I0812 03:35:56.001112    9032 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0812 03:35:56.001128    9032 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0812 03:35:56.001136    9032 cache.go:56] Caching tarball of preloaded images
	I0812 03:35:56.001195    9032 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0812 03:35:56.001200    9032 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0812 03:35:56.001266    9032 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/kubernetes-upgrade-917000/config.json ...
	I0812 03:35:56.001722    9032 start.go:360] acquireMachinesLock for kubernetes-upgrade-917000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:35:56.001748    9032 start.go:364] duration metric: took 20.334µs to acquireMachinesLock for "kubernetes-upgrade-917000"
	I0812 03:35:56.001757    9032 start.go:96] Skipping create...Using existing machine configuration
	I0812 03:35:56.001763    9032 fix.go:54] fixHost starting: 
	I0812 03:35:56.001873    9032 fix.go:112] recreateIfNeeded on kubernetes-upgrade-917000: state=Stopped err=<nil>
	W0812 03:35:56.001881    9032 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 03:35:56.009073    9032 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-917000" ...
	I0812 03:35:56.013081    9032 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:35:56.013114    9032 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubernetes-upgrade-917000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubernetes-upgrade-917000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubernetes-upgrade-917000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:60:6a:c9:95:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubernetes-upgrade-917000/disk.qcow2
	I0812 03:35:56.014965    9032 main.go:141] libmachine: STDOUT: 
	I0812 03:35:56.014981    9032 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:35:56.015009    9032 fix.go:56] duration metric: took 13.248ms for fixHost
	I0812 03:35:56.015013    9032 start.go:83] releasing machines lock for "kubernetes-upgrade-917000", held for 13.261625ms
	W0812 03:35:56.015019    9032 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:35:56.015044    9032 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:35:56.015048    9032 start.go:729] Will try again in 5 seconds ...
	I0812 03:36:01.017216    9032 start.go:360] acquireMachinesLock for kubernetes-upgrade-917000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:36:01.017815    9032 start.go:364] duration metric: took 483.334µs to acquireMachinesLock for "kubernetes-upgrade-917000"
	I0812 03:36:01.017899    9032 start.go:96] Skipping create...Using existing machine configuration
	I0812 03:36:01.017923    9032 fix.go:54] fixHost starting: 
	I0812 03:36:01.018671    9032 fix.go:112] recreateIfNeeded on kubernetes-upgrade-917000: state=Stopped err=<nil>
	W0812 03:36:01.018697    9032 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 03:36:01.027311    9032 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-917000" ...
	I0812 03:36:01.031308    9032 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:36:01.031583    9032 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubernetes-upgrade-917000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubernetes-upgrade-917000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubernetes-upgrade-917000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:60:6a:c9:95:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubernetes-upgrade-917000/disk.qcow2
	I0812 03:36:01.040969    9032 main.go:141] libmachine: STDOUT: 
	I0812 03:36:01.041026    9032 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:36:01.041128    9032 fix.go:56] duration metric: took 23.209292ms for fixHost
	I0812 03:36:01.041145    9032 start.go:83] releasing machines lock for "kubernetes-upgrade-917000", held for 23.306792ms
	W0812 03:36:01.041331    9032 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-917000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:36:01.050301    9032 out.go:177] 
	W0812 03:36:01.053461    9032 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:36:01.053514    9032 out.go:239] * 
	W0812 03:36:01.055429    9032 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:36:01.064288    9032 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-917000 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-917000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-917000 version --output=json: exit status 1 (51.991708ms)

** stderr ** 
	error: context "kubernetes-upgrade-917000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-08-12 03:36:01.127941 -0700 PDT m=+1005.315066085
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-917000 -n kubernetes-upgrade-917000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-917000 -n kubernetes-upgrade-917000: exit status 7 (30.792667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-917000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-917000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-917000
--- FAIL: TestKubernetesUpgrade (19.06s)
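
Every QEMU launch in this test fails on one symptom: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client is refused before qemu-system-aarch64 ever boots. A standalone probe such as the hedged Go sketch below (not part of the test suite; the socket path is taken from SocketVMnetPath in the cluster config logged above) reproduces the failing dial:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Dial the unix socket socket_vmnet_client needs; "connection refused"
		// here matches the log and means the socket_vmnet daemon is down.
		// (Reaching the socket may require the same privileges as the client.)
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe is refused, the fix is operational (restore the socket_vmnet daemon on the agent); no amount of minikube-side retrying or profile deletion can succeed until it is back.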

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.42s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19409
- KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3193325995/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.42s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.22s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19409
- KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3961301600/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.22s)
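
Both TestHyperkitDriverSkipUpgrade subtests fail identically: hyperkit is an Intel-only macOS hypervisor, so on this darwin/arm64 agent minikube rejects the driver before any upgrade logic runs and exits 56 (DRV_UNSUPPORTED_OS). The gate reduces to a GOOS/GOARCH check; a minimal sketch with illustrative names, not minikube's actual code:

	package main

	import (
		"fmt"
		"os"
		"runtime"
	)

	// hyperkitSupported mirrors the platform gate implied by DRV_UNSUPPORTED_OS:
	// the hyperkit driver only exists for Intel (amd64) macOS.
	func hyperkitSupported() bool {
		return runtime.GOOS == "darwin" && runtime.GOARCH == "amd64"
	}

	func main() {
		if !hyperkitSupported() {
			fmt.Printf("X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on %s/%s\n",
				runtime.GOOS, runtime.GOARCH)
			os.Exit(56)
		}
	}

On an M1 agent these subtests can only pass if they are skipped, so the failures are environmental rather than regressions in the upgrade path.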

TestStoppedBinaryUpgrade/Upgrade (575.28s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.294370075 start -p stopped-upgrade-743000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.294370075 start -p stopped-upgrade-743000 --memory=2200 --vm-driver=qemu2 : (41.290105458s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.294370075 -p stopped-upgrade-743000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.294370075 -p stopped-upgrade-743000 stop: (12.109949584s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-743000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-743000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.797703083s)

-- stdout --
	* [stopped-upgrade-743000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-743000" primary control-plane node in "stopped-upgrade-743000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-743000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0812 03:36:56.080084    9066 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:36:56.080322    9066 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:36:56.080326    9066 out.go:304] Setting ErrFile to fd 2...
	I0812 03:36:56.080329    9066 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:36:56.080476    9066 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:36:56.081892    9066 out.go:298] Setting JSON to false
	I0812 03:36:56.101704    9066 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5786,"bootTime":1723453230,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:36:56.101787    9066 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:36:56.106855    9066 out.go:177] * [stopped-upgrade-743000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:36:56.114874    9066 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:36:56.114940    9066 notify.go:220] Checking for updates...
	I0812 03:36:56.120820    9066 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:36:56.123863    9066 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:36:56.126878    9066 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:36:56.129825    9066 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:36:56.132865    9066 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:36:56.136216    9066 config.go:182] Loaded profile config "stopped-upgrade-743000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0812 03:36:56.139762    9066 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0812 03:36:56.142867    9066 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:36:56.146793    9066 out.go:177] * Using the qemu2 driver based on existing profile
	I0812 03:36:56.153834    9066 start.go:297] selected driver: qemu2
	I0812 03:36:56.153841    9066 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-743000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51463 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-743000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0812 03:36:56.153907    9066 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:36:56.156753    9066 cni.go:84] Creating CNI manager for ""
	I0812 03:36:56.156771    9066 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0812 03:36:56.156808    9066 start.go:340] cluster config:
	{Name:stopped-upgrade-743000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51463 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-743000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0812 03:36:56.156859    9066 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:36:56.163819    9066 out.go:177] * Starting "stopped-upgrade-743000" primary control-plane node in "stopped-upgrade-743000" cluster
	I0812 03:36:56.167774    9066 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0812 03:36:56.167806    9066 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0812 03:36:56.167814    9066 cache.go:56] Caching tarball of preloaded images
	I0812 03:36:56.167885    9066 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0812 03:36:56.167891    9066 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0812 03:36:56.167942    9066 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/config.json ...
	I0812 03:36:56.168346    9066 start.go:360] acquireMachinesLock for stopped-upgrade-743000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:36:56.168395    9066 start.go:364] duration metric: took 41.584µs to acquireMachinesLock for "stopped-upgrade-743000"
	I0812 03:36:56.168412    9066 start.go:96] Skipping create...Using existing machine configuration
	I0812 03:36:56.168419    9066 fix.go:54] fixHost starting: 
	I0812 03:36:56.168545    9066 fix.go:112] recreateIfNeeded on stopped-upgrade-743000: state=Stopped err=<nil>
	W0812 03:36:56.168554    9066 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 03:36:56.176871    9066 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-743000" ...
	I0812 03:36:56.180882    9066 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:36:56.180965    9066 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/stopped-upgrade-743000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/stopped-upgrade-743000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/stopped-upgrade-743000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51428-:22,hostfwd=tcp::51429-:2376,hostname=stopped-upgrade-743000 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/stopped-upgrade-743000/disk.qcow2
	I0812 03:36:56.230302    9066 main.go:141] libmachine: STDOUT: 
	I0812 03:36:56.230332    9066 main.go:141] libmachine: STDERR: 
	I0812 03:36:56.230338    9066 main.go:141] libmachine: Waiting for VM to start (ssh -p 51428 docker@127.0.0.1)...
	I0812 03:37:16.252232    9066 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/config.json ...
	I0812 03:37:16.253109    9066 machine.go:94] provisionDockerMachine start ...
	I0812 03:37:16.253295    9066 main.go:141] libmachine: Using SSH client type: native
	I0812 03:37:16.253816    9066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024dea10] 0x1024e1270 <nil>  [] 0s} localhost 51428 <nil> <nil>}
	I0812 03:37:16.253830    9066 main.go:141] libmachine: About to run SSH command:
	hostname
	I0812 03:37:16.352855    9066 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0812 03:37:16.352889    9066 buildroot.go:166] provisioning hostname "stopped-upgrade-743000"
	I0812 03:37:16.353025    9066 main.go:141] libmachine: Using SSH client type: native
	I0812 03:37:16.353286    9066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024dea10] 0x1024e1270 <nil>  [] 0s} localhost 51428 <nil> <nil>}
	I0812 03:37:16.353301    9066 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-743000 && echo "stopped-upgrade-743000" | sudo tee /etc/hostname
	I0812 03:37:16.445903    9066 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-743000
	
	I0812 03:37:16.446017    9066 main.go:141] libmachine: Using SSH client type: native
	I0812 03:37:16.446232    9066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024dea10] 0x1024e1270 <nil>  [] 0s} localhost 51428 <nil> <nil>}
	I0812 03:37:16.446245    9066 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-743000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-743000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-743000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 03:37:16.527569    9066 main.go:141] libmachine: SSH cmd err, output: <nil>: 
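The "native" SSH client lines above open one session per command against the forwarded localhost port. A minimal sketch of that pattern using golang.org/x/crypto/ssh, assuming a placeholder key path (minikube's libmachine client differs in detail):

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/path/to/machines/stopped-upgrade-743000/id_rsa") // placeholder
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM on localhost
		}
		client, err := ssh.Dial("tcp", "localhost:51428", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		sess, err := client.NewSession() // one session per command, as in the log
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput("hostname")
		fmt.Printf("SSH cmd err, output: %v: %s", err, out)
	}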
	I0812 03:37:16.527586    9066 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19409-6342/.minikube CaCertPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19409-6342/.minikube}
	I0812 03:37:16.527603    9066 buildroot.go:174] setting up certificates
	I0812 03:37:16.527612    9066 provision.go:84] configureAuth start
	I0812 03:37:16.527619    9066 provision.go:143] copyHostCerts
	I0812 03:37:16.527707    9066 exec_runner.go:144] found /Users/jenkins/minikube-integration/19409-6342/.minikube/ca.pem, removing ...
	I0812 03:37:16.527714    9066 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19409-6342/.minikube/ca.pem
	I0812 03:37:16.527885    9066 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19409-6342/.minikube/ca.pem (1082 bytes)
	I0812 03:37:16.528119    9066 exec_runner.go:144] found /Users/jenkins/minikube-integration/19409-6342/.minikube/cert.pem, removing ...
	I0812 03:37:16.528125    9066 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19409-6342/.minikube/cert.pem
	I0812 03:37:16.528184    9066 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19409-6342/.minikube/cert.pem (1123 bytes)
	I0812 03:37:16.528307    9066 exec_runner.go:144] found /Users/jenkins/minikube-integration/19409-6342/.minikube/key.pem, removing ...
	I0812 03:37:16.528312    9066 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19409-6342/.minikube/key.pem
	I0812 03:37:16.528373    9066 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19409-6342/.minikube/key.pem (1675 bytes)
	I0812 03:37:16.528471    9066 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-743000 san=[127.0.0.1 localhost minikube stopped-upgrade-743000]
	I0812 03:37:16.567156    9066 provision.go:177] copyRemoteCerts
	I0812 03:37:16.567185    9066 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 03:37:16.567192    9066 sshutil.go:53] new ssh client: &{IP:localhost Port:51428 SSHKeyPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/stopped-upgrade-743000/id_rsa Username:docker}
	I0812 03:37:16.607603    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0812 03:37:16.614551    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0812 03:37:16.621808    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0812 03:37:16.629188    9066 provision.go:87] duration metric: took 101.570083ms to configureAuth
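configureAuth regenerates the Docker server certificate with the SAN list shown at provision.go:117. A self-contained sketch of signing a server cert with those IP and DNS SANs via crypto/x509, using a throwaway in-memory CA purely for illustration:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func must[T any](v T, err error) T {
		if err != nil {
			log.Fatal(err)
		}
		return v
	}

	func main() {
		// Throwaway CA standing in for minikubeCA (illustration only).
		caKey := must(rsa.GenerateKey(rand.Reader, 2048))
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caCert := must(x509.ParseCertificate(
			must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))))

		// Server cert with the SANs from the log:
		// san=[127.0.0.1 localhost minikube stopped-upgrade-743000]
		srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-743000"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-743000"},
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}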
	I0812 03:37:16.629197    9066 buildroot.go:189] setting minikube options for container-runtime
	I0812 03:37:16.629315    9066 config.go:182] Loaded profile config "stopped-upgrade-743000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0812 03:37:16.629353    9066 main.go:141] libmachine: Using SSH client type: native
	I0812 03:37:16.629442    9066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024dea10] 0x1024e1270 <nil>  [] 0s} localhost 51428 <nil> <nil>}
	I0812 03:37:16.629447    9066 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0812 03:37:16.700439    9066 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0812 03:37:16.700448    9066 buildroot.go:70] root file system type: tmpfs
	I0812 03:37:16.700499    9066 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0812 03:37:16.700551    9066 main.go:141] libmachine: Using SSH client type: native
	I0812 03:37:16.700673    9066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024dea10] 0x1024e1270 <nil>  [] 0s} localhost 51428 <nil> <nil>}
	I0812 03:37:16.700708    9066 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0812 03:37:16.775779    9066 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0812 03:37:16.775833    9066 main.go:141] libmachine: Using SSH client type: native
	I0812 03:37:16.775940    9066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024dea10] 0x1024e1270 <nil>  [] 0s} localhost 51428 <nil> <nil>}
	I0812 03:37:16.775948    9066 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0812 03:37:17.176367    9066 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0812 03:37:17.176380    9066 machine.go:97] duration metric: took 923.272333ms to provisionDockerMachine
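The docker.service update above is idempotent: diff exits zero when the staged .new file matches the installed unit, so nothing happens; only on a difference (or, as here, a missing file) is the staged file moved into place and the service reloaded. A sketch of the same compare-then-replace pattern in Go (running systemctl requires root; paths are taken from the log):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// updateUnit replaces the unit file and bounces the service only when the
	// staged ".new" content actually differs from what is installed.
	func updateUnit(current, staged string) error {
		old, _ := os.ReadFile(current) // missing file -> nil slice, treated as "differs"
		newer, err := os.ReadFile(staged)
		if err != nil {
			return err
		}
		if bytes.Equal(old, newer) {
			return os.Remove(staged) // identical: discard the staged copy
		}
		if err := os.Rename(staged, current); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
		} {
			if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		err := updateUnit("/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new")
		fmt.Println(err)
	}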
	I0812 03:37:17.176387    9066 start.go:293] postStartSetup for "stopped-upgrade-743000" (driver="qemu2")
	I0812 03:37:17.176394    9066 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 03:37:17.176452    9066 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 03:37:17.176464    9066 sshutil.go:53] new ssh client: &{IP:localhost Port:51428 SSHKeyPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/stopped-upgrade-743000/id_rsa Username:docker}
	I0812 03:37:17.217623    9066 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 03:37:17.219445    9066 info.go:137] Remote host: Buildroot 2021.02.12
	I0812 03:37:17.219459    9066 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19409-6342/.minikube/addons for local assets ...
	I0812 03:37:17.219570    9066 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19409-6342/.minikube/files for local assets ...
	I0812 03:37:17.219698    9066 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19409-6342/.minikube/files/etc/ssl/certs/68412.pem -> 68412.pem in /etc/ssl/certs
	I0812 03:37:17.219831    9066 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 03:37:17.224799    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/files/etc/ssl/certs/68412.pem --> /etc/ssl/certs/68412.pem (1708 bytes)
	I0812 03:37:17.233485    9066 start.go:296] duration metric: took 57.09025ms for postStartSetup
	I0812 03:37:17.233506    9066 fix.go:56] duration metric: took 21.065379875s for fixHost
	I0812 03:37:17.233578    9066 main.go:141] libmachine: Using SSH client type: native
	I0812 03:37:17.233702    9066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024dea10] 0x1024e1270 <nil>  [] 0s} localhost 51428 <nil> <nil>}
	I0812 03:37:17.233707    9066 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0812 03:37:17.311135    9066 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723459037.311841504
	
	I0812 03:37:17.311145    9066 fix.go:216] guest clock: 1723459037.311841504
	I0812 03:37:17.311150    9066 fix.go:229] Guest: 2024-08-12 03:37:17.311841504 -0700 PDT Remote: 2024-08-12 03:37:17.233509 -0700 PDT m=+21.185372084 (delta=78.332504ms)
	I0812 03:37:17.311165    9066 fix.go:200] guest clock delta is within tolerance: 78.332504ms
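The fix.go lines above compare the guest's date +%s.%N output against the host clock and accept the ~78ms delta. A sketch of that computation using the values from the log; the 2s tolerance below is an assumed illustration, not minikube's actual constant:

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output (fractional Unix
	// seconds) and returns its offset from the host clock.
	func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second))) // ~100ns float rounding, fine here
		return guest.Sub(host), nil
	}

	func main() {
		// Guest and host timestamps from the log lines above.
		d, err := clockDelta("1723459037.311841504", time.Unix(1723459037, 233509000))
		if err != nil {
			panic(err)
		}
		const tolerance = 2 * time.Second // assumed threshold
		fmt.Printf("delta=%v within tolerance=%v: %v\n", d, tolerance, d.Abs() <= tolerance)
	}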
	I0812 03:37:17.311167    9066 start.go:83] releasing machines lock for "stopped-upgrade-743000", held for 21.143057916s
	I0812 03:37:17.311241    9066 ssh_runner.go:195] Run: cat /version.json
	I0812 03:37:17.311252    9066 sshutil.go:53] new ssh client: &{IP:localhost Port:51428 SSHKeyPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/stopped-upgrade-743000/id_rsa Username:docker}
	I0812 03:37:17.311242    9066 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 03:37:17.311359    9066 sshutil.go:53] new ssh client: &{IP:localhost Port:51428 SSHKeyPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/stopped-upgrade-743000/id_rsa Username:docker}
	W0812 03:37:17.311796    9066 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51428: connect: connection refused
	I0812 03:37:17.311811    9066 retry.go:31] will retry after 141.237118ms: dial tcp [::1]:51428: connect: connection refused
	W0812 03:37:17.348242    9066 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0812 03:37:17.348292    9066 ssh_runner.go:195] Run: systemctl --version
	I0812 03:37:17.350068    9066 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 03:37:17.351800    9066 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 03:37:17.351829    9066 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0812 03:37:17.354755    9066 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0812 03:37:17.359257    9066 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
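The find/sed pipeline above forces every bridge and podman CNI conflist onto the pod CIDR 10.244.0.0/16. A rough sketch of the same edit done structurally over JSON instead of with sed; real conflists vary (some nest subnets under an ipam "ranges" list rather than a flat "subnet" key):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// setSubnet rewrites the ipam subnet of every plugin in a CNI conflist.
	// Simplified relative to real conflist schemas.
	func setSubnet(conflist []byte, subnet string) ([]byte, error) {
		var doc map[string]any
		if err := json.Unmarshal(conflist, &doc); err != nil {
			return nil, err
		}
		plugins, _ := doc["plugins"].([]any)
		for _, p := range plugins {
			if m, ok := p.(map[string]any); ok {
				if ipam, ok := m["ipam"].(map[string]any); ok {
					ipam["subnet"] = subnet
				}
			}
		}
		return json.MarshalIndent(doc, "", "  ")
	}

	func main() {
		in := []byte(`{"plugins":[{"type":"bridge","ipam":{"subnet":"10.88.0.0/16"}}]}`)
		out, _ := setSubnet(in, "10.244.0.0/16")
		fmt.Println(string(out))
	}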
	I0812 03:37:17.359268    9066 start.go:495] detecting cgroup driver to use...
	I0812 03:37:17.359349    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 03:37:17.366585    9066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0812 03:37:17.370157    9066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0812 03:37:17.373649    9066 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0812 03:37:17.373688    9066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0812 03:37:17.376916    9066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0812 03:37:17.379684    9066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0812 03:37:17.382726    9066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0812 03:37:17.386306    9066 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 03:37:17.389686    9066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0812 03:37:17.392704    9066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0812 03:37:17.395606    9066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0812 03:37:17.398818    9066 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 03:37:17.401877    9066 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 03:37:17.404404    9066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 03:37:17.481341    9066 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0812 03:37:17.489206    9066 start.go:495] detecting cgroup driver to use...
	I0812 03:37:17.489276    9066 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0812 03:37:17.494441    9066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 03:37:17.500354    9066 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 03:37:17.545250    9066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 03:37:17.550059    9066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0812 03:37:17.554738    9066 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0812 03:37:17.618953    9066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0812 03:37:17.624663    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 03:37:17.630434    9066 ssh_runner.go:195] Run: which cri-dockerd
	I0812 03:37:17.631641    9066 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0812 03:37:17.634297    9066 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0812 03:37:17.639290    9066 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0812 03:37:17.717036    9066 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0812 03:37:17.789158    9066 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0812 03:37:17.789225    9066 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0812 03:37:17.794666    9066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 03:37:17.872511    9066 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0812 03:37:19.052053    9066 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.179526834s)
	I0812 03:37:19.052109    9066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0812 03:37:19.056815    9066 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0812 03:37:19.062998    9066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0812 03:37:19.067655    9066 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0812 03:37:19.146721    9066 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0812 03:37:19.229146    9066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 03:37:19.312707    9066 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0812 03:37:19.318596    9066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0812 03:37:19.323510    9066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 03:37:19.380827    9066 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0812 03:37:19.420408    9066 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0812 03:37:19.420482    9066 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0812 03:37:19.422440    9066 start.go:563] Will wait 60s for crictl version
	I0812 03:37:19.422487    9066 ssh_runner.go:195] Run: which crictl
	I0812 03:37:19.424154    9066 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 03:37:19.438401    9066 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
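Before querying crictl, the run waits up to 60s for /var/run/cri-dockerd.sock to appear (the stat call above). A minimal sketch of such a wait loop; the 500ms poll interval is an assumption:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until the CRI socket exists or the timeout elapses.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond) // assumed poll interval
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
	}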
	I0812 03:37:19.438463    9066 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0812 03:37:19.454325    9066 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0812 03:37:19.478672    9066 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0812 03:37:19.478736    9066 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0812 03:37:19.480139    9066 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 03:37:19.483605    9066 kubeadm.go:883] updating cluster {Name:stopped-upgrade-743000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51463 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-743000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0812 03:37:19.483664    9066 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0812 03:37:19.483707    9066 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0812 03:37:19.494327    9066 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0812 03:37:19.494335    9066 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
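The mismatch here is a registry rename: the old preload tags images under k8s.gcr.io, while this minikube expects registry.k8s.io (the Kubernetes project's current image host), so every required image is reported as not preloaded even though its k8s.gcr.io twin is present. A sketch of normalizing the retired prefix before comparison:

	package main

	import (
		"fmt"
		"strings"
	)

	// normalize maps the retired registry prefix onto the current one so a
	// preloaded k8s.gcr.io image can satisfy a registry.k8s.io requirement.
	func normalize(ref string) string {
		return strings.Replace(ref, "k8s.gcr.io/", "registry.k8s.io/", 1)
	}

	func main() {
		have := "k8s.gcr.io/kube-apiserver:v1.24.1"
		want := "registry.k8s.io/kube-apiserver:v1.24.1"
		fmt.Println(normalize(have) == want) // true
	}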
	I0812 03:37:19.494375    9066 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0812 03:37:19.497899    9066 ssh_runner.go:195] Run: which lz4
	I0812 03:37:19.499230    9066 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0812 03:37:19.500451    9066 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0812 03:37:19.500461    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0812 03:37:20.363040    9066 docker.go:649] duration metric: took 863.849875ms to copy over tarball
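The stat probe above acts as the transfer guard: the 359MB tarball is copied only when the existence check fails (or, in a fuller check, reports an unexpected size). A local sketch of that decision logic; the real check runs stat over SSH:

	package main

	import (
		"fmt"
		"os"
	)

	// needsCopy reports whether the destination is absent or the wrong size.
	func needsCopy(dst string, wantSize int64) bool {
		info, err := os.Stat(dst)
		if err != nil {
			return true // "No such file or directory" -> transfer it
		}
		return info.Size() != wantSize
	}

	func main() {
		fmt.Println(needsCopy("/preloaded.tar.lz4", 359514331))
	}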
	I0812 03:37:20.363099    9066 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0812 03:37:21.515821    9066 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.152715167s)
	I0812 03:37:21.515835    9066 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0812 03:37:21.531902    9066 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0812 03:37:21.535000    9066 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0812 03:37:21.540172    9066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 03:37:21.626396    9066 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0812 03:37:23.174634    9066 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.548239334s)
	I0812 03:37:23.174723    9066 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0812 03:37:23.190465    9066 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0812 03:37:23.190479    9066 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0812 03:37:23.190485    9066 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0812 03:37:23.194593    9066 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 03:37:23.196176    9066 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0812 03:37:23.198031    9066 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 03:37:23.198362    9066 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0812 03:37:23.199686    9066 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0812 03:37:23.199761    9066 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0812 03:37:23.201051    9066 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0812 03:37:23.202383    9066 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0812 03:37:23.202480    9066 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0812 03:37:23.202801    9066 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0812 03:37:23.203694    9066 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0812 03:37:23.203761    9066 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0812 03:37:23.205108    9066 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0812 03:37:23.205215    9066 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0812 03:37:23.205720    9066 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0812 03:37:23.206419    9066 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0812 03:37:23.645538    9066 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0812 03:37:23.646955    9066 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0812 03:37:23.656568    9066 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0812 03:37:23.656590    9066 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0812 03:37:23.656652    9066 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0812 03:37:23.657416    9066 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0812 03:37:23.658391    9066 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0812 03:37:23.658401    9066 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0812 03:37:23.658424    9066 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	W0812 03:37:23.662136    9066 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0812 03:37:23.662269    9066 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0812 03:37:23.675730    9066 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0812 03:37:23.676990    9066 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0812 03:37:23.683650    9066 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0812 03:37:23.683669    9066 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0812 03:37:23.683670    9066 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0812 03:37:23.683679    9066 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0812 03:37:23.683692    9066 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0812 03:37:23.683730    9066 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0812 03:37:23.683730    9066 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0812 03:37:23.697585    9066 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0812 03:37:23.699864    9066 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0812 03:37:23.699880    9066 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0812 03:37:23.699909    9066 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0812 03:37:23.707188    9066 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0812 03:37:23.709618    9066 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0812 03:37:23.709754    9066 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0812 03:37:23.709854    9066 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0812 03:37:23.720091    9066 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0812 03:37:23.720116    9066 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0812 03:37:23.720165    9066 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0812 03:37:23.728728    9066 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0812 03:37:23.728849    9066 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0812 03:37:23.729931    9066 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0812 03:37:23.729947    9066 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0812 03:37:23.729983    9066 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0812 03:37:23.730018    9066 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0812 03:37:23.730036    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0812 03:37:23.749413    9066 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0812 03:37:23.749457    9066 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0812 03:37:23.749469    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0812 03:37:23.749535    9066 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0812 03:37:23.761545    9066 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0812 03:37:23.776214    9066 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0812 03:37:23.776243    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0812 03:37:23.781323    9066 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0812 03:37:23.781338    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0812 03:37:23.802880    9066 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0812 03:37:23.802987    9066 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 03:37:23.851785    9066 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0812 03:37:23.851806    9066 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0812 03:37:23.851812    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0812 03:37:23.857120    9066 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0812 03:37:23.857145    9066 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 03:37:23.857206    9066 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 03:37:23.950035    9066 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0812 03:37:23.950055    9066 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0812 03:37:23.950179    9066 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0812 03:37:23.956227    9066 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0812 03:37:23.956258    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0812 03:37:24.024177    9066 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0812 03:37:24.024193    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0812 03:37:24.387838    9066 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0812 03:37:24.387865    9066 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0812 03:37:24.387873    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0812 03:37:24.539036    9066 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0812 03:37:24.539077    9066 cache_images.go:92] duration metric: took 1.348598042s to LoadCachedImages
	W0812 03:37:24.539117    9066 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
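Each image that survives the cache lookup is streamed into the daemon with `sudo cat <file> | docker load`, as in the Run lines above. An equivalent sketch wiring a cached tarball to docker load's stdin:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// loadImage streams an image tarball into `docker load` on stdin,
	// mirroring `cat <file> | docker load`.
	func loadImage(tarPath string) error {
		f, err := os.Open(tarPath)
		if err != nil {
			return err
		}
		defer f.Close()
		cmd := exec.Command("docker", "load")
		cmd.Stdin = f
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("docker load: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		fmt.Println(loadImage("/var/lib/minikube/images/etcd_3.5.3-0"))
	}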
	I0812 03:37:24.539129    9066 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0812 03:37:24.539176    9066 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-743000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-743000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0812 03:37:24.539249    9066 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0812 03:37:24.553153    9066 cni.go:84] Creating CNI manager for ""
	I0812 03:37:24.553169    9066 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0812 03:37:24.553174    9066 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0812 03:37:24.553182    9066 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-743000 NodeName:stopped-upgrade-743000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0812 03:37:24.553245    9066 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-743000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0812 03:37:24.553303    9066 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0812 03:37:24.556111    9066 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 03:37:24.556141    9066 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0812 03:37:24.558870    9066 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0812 03:37:24.564038    9066 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 03:37:24.568715    9066 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0812 03:37:24.573964    9066 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0812 03:37:24.575435    9066 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 03:37:24.579149    9066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 03:37:24.664524    9066 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 03:37:24.669580    9066 certs.go:68] Setting up /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000 for IP: 10.0.2.15
	I0812 03:37:24.669590    9066 certs.go:194] generating shared ca certs ...
	I0812 03:37:24.669599    9066 certs.go:226] acquiring lock for ca certs: {Name:mk040c6fb5b98a0bc56f55d23979ed8d77242cff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:37:24.669774    9066 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19409-6342/.minikube/ca.key
	I0812 03:37:24.669826    9066 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19409-6342/.minikube/proxy-client-ca.key
	I0812 03:37:24.669831    9066 certs.go:256] generating profile certs ...
	I0812 03:37:24.669920    9066 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/client.key
	I0812 03:37:24.669937    9066 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/apiserver.key.5b68802c
	I0812 03:37:24.669949    9066 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/apiserver.crt.5b68802c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0812 03:37:24.744477    9066 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/apiserver.crt.5b68802c ...
	I0812 03:37:24.744489    9066 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/apiserver.crt.5b68802c: {Name:mk9f5c2514d0b4bb1c574718ce8d3c9d47233e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:37:24.744918    9066 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/apiserver.key.5b68802c ...
	I0812 03:37:24.744925    9066 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/apiserver.key.5b68802c: {Name:mk3f7bf68d0cf30662080a4152ee1bdf57f4967f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:37:24.745089    9066 certs.go:381] copying /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/apiserver.crt.5b68802c -> /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/apiserver.crt
	I0812 03:37:24.745230    9066 certs.go:385] copying /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/apiserver.key.5b68802c -> /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/apiserver.key
	I0812 03:37:24.745377    9066 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/proxy-client.key
	I0812 03:37:24.745512    9066 certs.go:484] found cert: /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/6841.pem (1338 bytes)
	W0812 03:37:24.745540    9066 certs.go:480] ignoring /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/6841_empty.pem, impossibly tiny 0 bytes
	I0812 03:37:24.745546    9066 certs.go:484] found cert: /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca-key.pem (1679 bytes)
	I0812 03:37:24.745573    9066 certs.go:484] found cert: /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem (1082 bytes)
	I0812 03:37:24.745598    9066 certs.go:484] found cert: /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem (1123 bytes)
	I0812 03:37:24.745623    9066 certs.go:484] found cert: /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/key.pem (1675 bytes)
	I0812 03:37:24.745676    9066 certs.go:484] found cert: /Users/jenkins/minikube-integration/19409-6342/.minikube/files/etc/ssl/certs/68412.pem (1708 bytes)
	I0812 03:37:24.745999    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 03:37:24.753240    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 03:37:24.760288    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 03:37:24.767253    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 03:37:24.774603    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0812 03:37:24.782490    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0812 03:37:24.790639    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 03:37:24.798541    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0812 03:37:24.806325    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/6841.pem --> /usr/share/ca-certificates/6841.pem (1338 bytes)
	I0812 03:37:24.814047    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/files/etc/ssl/certs/68412.pem --> /usr/share/ca-certificates/68412.pem (1708 bytes)
	I0812 03:37:24.822082    9066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19409-6342/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 03:37:24.829877    9066 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 03:37:24.835841    9066 ssh_runner.go:195] Run: openssl version
	I0812 03:37:24.838127    9066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/68412.pem && ln -fs /usr/share/ca-certificates/68412.pem /etc/ssl/certs/68412.pem"
	I0812 03:37:24.841897    9066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/68412.pem
	I0812 03:37:24.843543    9066 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 10:20 /usr/share/ca-certificates/68412.pem
	I0812 03:37:24.843578    9066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/68412.pem
	I0812 03:37:24.845673    9066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/68412.pem /etc/ssl/certs/3ec20f2e.0"
	I0812 03:37:24.849326    9066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 03:37:24.852514    9066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 03:37:24.854274    9066 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 10:33 /usr/share/ca-certificates/minikubeCA.pem
	I0812 03:37:24.854317    9066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 03:37:24.856522    9066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 03:37:24.859848    9066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6841.pem && ln -fs /usr/share/ca-certificates/6841.pem /etc/ssl/certs/6841.pem"
	I0812 03:37:24.863159    9066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6841.pem
	I0812 03:37:24.864777    9066 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 10:20 /usr/share/ca-certificates/6841.pem
	I0812 03:37:24.864808    9066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6841.pem
	I0812 03:37:24.866844    9066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6841.pem /etc/ssl/certs/51391683.0"
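
Note: the three hash symlinks above follow OpenSSL's hashed-directory convention. A client scanning /etc/ssl/certs looks a CA up through a link named <subject-hash>.0, so minikube computes the hash with openssl x509 -hash -noout and points the link at the PEM it just copied over. A minimal Go sketch of those same two steps, assuming the invented helper name installCACert:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // installCACert links a PEM file into /etc/ssl/certs under its OpenSSL
    // subject hash, the same layout the log builds above with
    // `openssl x509 -hash` followed by `ln -fs`. Helper name is invented
    // for illustration; this is a sketch, not minikube's code.
    func installCACert(pem string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pem, err)
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        // ln -fs replaces any stale link, matching the log above.
        return exec.Command("ln", "-fs", pem, link).Run()
    }

    func main() {
        if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println(err)
        }
    }
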
	I0812 03:37:24.870671    9066 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 03:37:24.872357    9066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0812 03:37:24.874701    9066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0812 03:37:24.877041    9066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0812 03:37:24.879377    9066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0812 03:37:24.881763    9066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0812 03:37:24.883999    9066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
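
Note: each -checkend 86400 call above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; a nonzero exit would mark the cert for regeneration before the cluster restart. A small sketch of that check, with an invented helper name:

    package main

    import (
        "fmt"
        "os/exec"
        "strconv"
        "time"
    )

    // certValidFor reports whether the certificate at path remains valid
    // for at least d from now, using the same `openssl x509 -checkend`
    // probe as the log above (86400 s = 24 h). Sketch only.
    func certValidFor(path string, d time.Duration) bool {
        secs := strconv.Itoa(int(d.Seconds()))
        // openssl exits 0 when the cert will not expire within the window.
        return exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", secs).Run() == nil
    }

    func main() {
        fmt.Println(certValidFor("/var/lib/minikube/certs/etcd/peer.crt", 24*time.Hour))
    }
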
	I0812 03:37:24.885985    9066 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-743000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51463 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-743000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0812 03:37:24.886065    9066 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0812 03:37:24.900919    9066 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0812 03:37:24.904221    9066 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0812 03:37:24.904228    9066 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0812 03:37:24.904265    9066 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0812 03:37:24.907314    9066 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0812 03:37:24.907640    9066 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-743000" does not appear in /Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:37:24.907739    9066 kubeconfig.go:62] /Users/jenkins/minikube-integration/19409-6342/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-743000" cluster setting kubeconfig missing "stopped-upgrade-743000" context setting]
	I0812 03:37:24.907960    9066 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/kubeconfig: {Name:mkb70885d9201a61b449567803d8de7b739c5101 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:37:24.908425    9066 kapi.go:59] client config for stopped-upgrade-743000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/client.key", CAFile:"/Users/jenkins/minikube-integration/19409-6342/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1038744e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0812 03:37:24.908758    9066 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0812 03:37:24.911956    9066 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-743000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
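
Note: the drift in the diff above is between two node-level settings. The old file names the CRI socket by bare path where newer tooling expects a unix:// URI, and the kubelet cgroup driver moves from systemd to cgroupfs (plus the added hairpin-mode and runtime-request-timeout lines). minikube detects this by diffing the live kubeadm.yaml against the freshly rendered .new file and reconfigures when they differ. A sketch of that detection, assuming the invented name detectDrift:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // detectDrift mirrors the config-drift check above: run `diff -u`
    // against the current and freshly rendered kubeadm.yaml and treat
    // exit status 1 (files differ) as drift. Sketch, not minikube's code.
    func detectDrift(oldPath, newPath string) (bool, string, error) {
        out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
        if err == nil {
            return false, "", nil // identical, nothing to reconfigure
        }
        if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
            return true, string(out), nil // differs: reconfigure from the new file
        }
        return false, "", err // diff itself failed, e.g. a file is missing
    }

    func main() {
        drift, diff, err := detectDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        fmt.Println(drift, err)
        if drift {
            fmt.Print(diff)
        }
    }
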
	I0812 03:37:24.911964    9066 kubeadm.go:1160] stopping kube-system containers ...
	I0812 03:37:24.912031    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0812 03:37:24.923439    9066 docker.go:483] Stopping containers: [93391c2226c7 a41e64288824 9306bfbeb4d2 56d45e7374fb 18fa8e4baf80 126b1845793f 07ab03f2f278 2d03e258149f]
	I0812 03:37:24.923513    9066 ssh_runner.go:195] Run: docker stop 93391c2226c7 a41e64288824 9306bfbeb4d2 56d45e7374fb 18fa8e4baf80 126b1845793f 07ab03f2f278 2d03e258149f
	I0812 03:37:24.935740    9066 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0812 03:37:24.941716    9066 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 03:37:24.945017    9066 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 03:37:24.945026    9066 kubeadm.go:157] found existing configuration files:
	
	I0812 03:37:24.945056    9066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51463 /etc/kubernetes/admin.conf
	I0812 03:37:24.947622    9066 kubeadm.go:163] "https://control-plane.minikube.internal:51463" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51463 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 03:37:24.947666    9066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 03:37:24.950820    9066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51463 /etc/kubernetes/kubelet.conf
	I0812 03:37:24.954258    9066 kubeadm.go:163] "https://control-plane.minikube.internal:51463" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51463 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 03:37:24.954309    9066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 03:37:24.957615    9066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51463 /etc/kubernetes/controller-manager.conf
	I0812 03:37:24.960639    9066 kubeadm.go:163] "https://control-plane.minikube.internal:51463" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51463 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 03:37:24.960682    9066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 03:37:24.963494    9066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51463 /etc/kubernetes/scheduler.conf
	I0812 03:37:24.966293    9066 kubeadm.go:163] "https://control-plane.minikube.internal:51463" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51463 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 03:37:24.966319    9066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
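
Note: each grep/rm pair above tests one kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes the file when the endpoint is absent. Here the files do not exist at all, so grep exits with status 2 and the rm is effectively a no-op; either way the following init phases regenerate them. A sketch of the pattern, with an invented helper name:

    package main

    import (
        "os/exec"
    )

    // pruneStaleConf drops a kubeconfig that does not mention the expected
    // control-plane endpoint, mirroring the grep/rm pairs above. The
    // helper name is invented for illustration.
    func pruneStaleConf(path, endpoint string) error {
        if exec.Command("grep", endpoint, path).Run() == nil {
            return nil // endpoint found, keep the file
        }
        // grep exits 1 on no match and 2 on a missing file, as in the log;
        // in both cases the file is removed so kubeadm can regenerate it.
        return exec.Command("rm", "-f", path).Run()
    }

    func main() {
        _ = pruneStaleConf("/etc/kubernetes/admin.conf", "https://control-plane.minikube.internal:51463")
    }
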
	I0812 03:37:24.969892    9066 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 03:37:24.973198    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 03:37:24.996343    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 03:37:25.542433    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0812 03:37:25.673201    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 03:37:25.695564    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
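
Note: because existing configuration files were found, the restart path replays individual `kubeadm init phase` steps against the repaired kubeadm.yaml instead of running a full init: certs, kubeconfig, kubelet-start, control-plane, and local etcd, in the order shown above. A sketch of that sequencing (not minikube's actual implementation):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // replayPhases runs the individual `kubeadm init phase` steps the log
    // shows, in order, against one config file. Sketch of the pattern only.
    func replayPhases(kubeadmBin, cfg string) error {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"init", "phase"}, p...)
            args = append(args, "--config", cfg)
            if out, err := exec.Command(kubeadmBin, args...).CombinedOutput(); err != nil {
                return fmt.Errorf("phase %v: %v\n%s", p, err, out)
            }
        }
        return nil
    }

    func main() {
        err := replayPhases("/var/lib/minikube/binaries/v1.24.1/kubeadm", "/var/tmp/minikube/kubeadm.yaml")
        fmt.Println(err)
    }
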
	I0812 03:37:25.722858    9066 api_server.go:52] waiting for apiserver process to appear ...
	I0812 03:37:25.722936    9066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 03:37:26.224999    9066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 03:37:26.724967    9066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 03:37:26.729168    9066 api_server.go:72] duration metric: took 1.006326625s to wait for apiserver process to appear ...
	I0812 03:37:26.729178    9066 api_server.go:88] waiting for apiserver healthz status ...
	I0812 03:37:26.729187    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:37:31.731241    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:37:31.731274    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:37:36.731578    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:37:36.731672    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:37:41.732369    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:37:41.732403    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:37:46.733055    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:37:46.733079    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:37:51.734184    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:37:51.734208    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:37:56.735150    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:37:56.735176    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:01.736429    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:01.736485    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:06.737226    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:06.737248    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:11.739113    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:11.739134    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:16.740647    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:16.740691    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:21.742882    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:21.742930    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:26.745115    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:26.745236    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:38:26.757357    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:38:26.757428    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:38:26.769114    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:38:26.769178    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:38:26.780624    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:38:26.780692    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:38:26.790835    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:38:26.790909    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:38:26.800876    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:38:26.800936    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:38:26.811453    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:38:26.811515    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:38:26.821879    9066 logs.go:276] 0 containers: []
	W0812 03:38:26.821894    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:38:26.821956    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:38:26.837172    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:38:26.837191    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:38:26.837196    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:38:26.851525    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:38:26.851535    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:38:26.863193    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:38:26.863208    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:38:26.900732    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:38:26.900740    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:38:26.914435    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:38:26.914447    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:38:26.926305    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:38:26.926316    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:38:26.947763    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:38:26.947773    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:38:27.042962    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:38:27.042973    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:38:27.054637    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:38:27.054654    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:38:27.067488    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:38:27.067500    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:38:27.079086    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:38:27.079097    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:38:27.105717    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:38:27.105727    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:38:27.121736    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:38:27.121748    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:38:27.133336    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:38:27.133346    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:38:27.137516    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:38:27.137524    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:38:27.165741    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:38:27.165753    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:38:27.179693    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:38:27.179704    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
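
Note: the cycle above is the shape of the rest of this run. minikube polls https://10.0.2.15:8443/healthz with roughly five-second per-request timeouts, and each time a window elapses with no answer it enumerates the kube-system containers via docker ps filters and dumps their recent logs (docker logs --tail 400, journalctl for kubelet and Docker, kubectl describe nodes) before polling again. A hedged Go sketch of the polling half; TLS verification is skipped here only to keep the example short, whereas minikube authenticates against the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthz polls the apiserver /healthz endpoint the way the loop
    // above does: a bounded per-request timeout, a short sleep between
    // attempts, and an overall deadline. Sketch only.
    func waitHealthz(url string, overall time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second, // matches the ~5 s gaps between checks above
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for end := time.Now().Add(overall); time.Now().Before(end); time.Sleep(500 * time.Millisecond) {
            resp, err := client.Get(url)
            if err != nil {
                continue // refused or timed out, as in the log
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                return nil
            }
        }
        return fmt.Errorf("%s never reported healthy", url)
    }

    func main() {
        fmt.Println(waitHealthz("https://10.0.2.15:8443/healthz", time.Minute))
    }
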
	I0812 03:38:29.695866    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:34.698178    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:34.698326    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:38:34.712122    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:38:34.712208    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:38:34.724095    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:38:34.724164    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:38:34.735536    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:38:34.735604    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:38:34.746047    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:38:34.746118    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:38:34.756640    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:38:34.756714    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:38:34.768976    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:38:34.769045    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:38:34.782588    9066 logs.go:276] 0 containers: []
	W0812 03:38:34.782600    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:38:34.782657    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:38:34.793181    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:38:34.793206    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:38:34.793212    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:38:34.797378    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:38:34.797387    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:38:34.823534    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:38:34.823548    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:38:34.861916    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:38:34.861928    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:38:34.876471    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:38:34.876482    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:38:34.888070    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:38:34.888082    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:38:34.900179    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:38:34.900193    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:38:34.914055    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:38:34.914066    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:38:34.928289    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:38:34.928300    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:38:34.939406    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:38:34.939418    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:38:34.951054    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:38:34.951065    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:38:34.966252    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:38:34.966263    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:38:34.984123    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:38:34.984134    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:38:35.019458    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:38:35.019469    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:38:35.031898    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:38:35.031910    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:38:35.043521    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:38:35.043532    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:38:35.070171    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:38:35.070183    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:38:37.587218    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:42.589818    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:42.590061    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:38:42.623606    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:38:42.623734    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:38:42.640768    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:38:42.640862    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:38:42.653731    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:38:42.653805    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:38:42.666092    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:38:42.666170    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:38:42.676641    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:38:42.676707    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:38:42.688744    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:38:42.688818    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:38:42.698920    9066 logs.go:276] 0 containers: []
	W0812 03:38:42.698932    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:38:42.698988    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:38:42.709487    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:38:42.709506    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:38:42.709512    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:38:42.747857    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:38:42.747874    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:38:42.762084    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:38:42.762095    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:38:42.782825    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:38:42.782840    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:38:42.795311    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:38:42.795322    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:38:42.830889    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:38:42.830904    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:38:42.848594    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:38:42.848604    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:38:42.863732    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:38:42.863742    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:38:42.877005    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:38:42.877016    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:38:42.892450    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:38:42.892469    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:38:42.896862    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:38:42.896869    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:38:42.921975    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:38:42.921990    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:38:42.933564    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:38:42.933579    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:38:42.945602    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:38:42.945617    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:38:42.957446    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:38:42.957456    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:38:42.982784    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:38:42.982792    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:38:42.998320    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:38:42.998330    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:38:45.509742    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:50.512096    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:50.512454    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:38:50.543869    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:38:50.543997    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:38:50.562944    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:38:50.563040    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:38:50.577176    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:38:50.577266    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:38:50.589174    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:38:50.589252    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:38:50.600078    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:38:50.600153    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:38:50.611577    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:38:50.611644    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:38:50.622060    9066 logs.go:276] 0 containers: []
	W0812 03:38:50.622071    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:38:50.622124    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:38:50.632767    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:38:50.632784    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:38:50.632790    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:38:50.645670    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:38:50.645685    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:38:50.650397    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:38:50.650405    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:38:50.684802    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:38:50.684813    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:38:50.699069    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:38:50.699080    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:38:50.711011    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:38:50.711021    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:38:50.724310    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:38:50.724322    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:38:50.736679    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:38:50.736689    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:38:50.754901    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:38:50.754913    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:38:50.788272    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:38:50.788283    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:38:50.802556    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:38:50.802567    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:38:50.813722    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:38:50.813736    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:38:50.825356    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:38:50.825365    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:38:50.840851    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:38:50.840866    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:38:50.880168    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:38:50.880177    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:38:50.897278    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:38:50.897292    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:38:50.922890    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:38:50.922901    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:38:53.442409    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:38:58.444898    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:38:58.445117    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:38:58.471958    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:38:58.472086    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:38:58.489818    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:38:58.489906    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:38:58.503627    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:38:58.503700    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:38:58.515827    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:38:58.515894    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:38:58.526137    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:38:58.526200    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:38:58.536757    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:38:58.536822    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:38:58.547278    9066 logs.go:276] 0 containers: []
	W0812 03:38:58.547289    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:38:58.547338    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:38:58.557662    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:38:58.557684    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:38:58.557689    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:38:58.569701    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:38:58.569713    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:38:58.595661    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:38:58.595669    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:38:58.599926    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:38:58.599935    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:38:58.614307    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:38:58.614319    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:38:58.640533    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:38:58.640543    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:38:58.651809    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:38:58.651820    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:38:58.663543    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:38:58.663555    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:38:58.675395    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:38:58.675406    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:38:58.686360    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:38:58.686370    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:38:58.701648    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:38:58.701658    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:38:58.736348    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:38:58.736358    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:38:58.751680    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:38:58.751689    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:38:58.764349    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:38:58.764360    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:38:58.803674    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:38:58.803687    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:38:58.817482    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:38:58.817493    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:38:58.832236    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:38:58.832247    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:39:01.355766    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:39:06.358110    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:39:06.358345    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:39:06.383498    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:39:06.383620    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:39:06.400467    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:39:06.400559    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:39:06.414355    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:39:06.414420    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:39:06.425581    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:39:06.425649    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:39:06.435980    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:39:06.436056    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:39:06.446791    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:39:06.446863    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:39:06.457137    9066 logs.go:276] 0 containers: []
	W0812 03:39:06.457148    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:39:06.457201    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:39:06.468092    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:39:06.468112    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:39:06.468118    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:39:06.504738    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:39:06.504748    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:39:06.509465    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:39:06.509471    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:39:06.534658    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:39:06.534666    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:39:06.562585    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:39:06.562596    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:39:06.581200    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:39:06.581211    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:39:06.592623    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:39:06.592633    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:39:06.606966    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:39:06.606979    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:39:06.621488    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:39:06.621498    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:39:06.632756    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:39:06.632767    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:39:06.644107    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:39:06.644118    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:39:06.655866    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:39:06.655877    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:39:06.673636    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:39:06.673651    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:39:06.690878    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:39:06.690888    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:39:06.702477    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:39:06.702492    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:39:06.736807    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:39:06.736817    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:39:06.750909    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:39:06.750922    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:39:09.266976    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:39:14.269223    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:39:14.269594    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:39:14.312789    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:39:14.312902    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:39:14.333893    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:39:14.333972    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:39:14.345434    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:39:14.345510    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:39:14.356860    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:39:14.356932    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:39:14.368470    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:39:14.368543    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:39:14.379526    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:39:14.379591    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:39:14.390080    9066 logs.go:276] 0 containers: []
	W0812 03:39:14.390092    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:39:14.390154    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:39:14.400530    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:39:14.400548    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:39:14.400553    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:39:14.413256    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:39:14.413266    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:39:14.427570    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:39:14.427584    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:39:14.443440    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:39:14.443451    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:39:14.455488    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:39:14.455498    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:39:14.468972    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:39:14.468986    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:39:14.505229    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:39:14.505240    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:39:14.530433    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:39:14.530448    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:39:14.544597    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:39:14.544607    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:39:14.556679    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:39:14.556690    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:39:14.580856    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:39:14.580865    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:39:14.593872    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:39:14.593888    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:39:14.631785    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:39:14.631795    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:39:14.647164    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:39:14.647174    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:39:14.664245    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:39:14.664259    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:39:14.678722    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:39:14.678732    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:39:14.691341    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:39:14.691356    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:39:17.203940    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:39:22.206231    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:39:22.206327    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:39:22.222028    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:39:22.222092    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:39:22.232655    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:39:22.232722    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:39:22.242924    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:39:22.242983    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:39:22.257715    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:39:22.257785    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:39:22.267992    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:39:22.268055    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:39:22.283945    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:39:22.284011    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:39:22.302128    9066 logs.go:276] 0 containers: []
	W0812 03:39:22.302139    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:39:22.302200    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:39:22.312297    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:39:22.312314    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:39:22.312320    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:39:22.329738    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:39:22.329749    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:39:22.344160    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:39:22.344174    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:39:22.355713    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:39:22.355724    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:39:22.390232    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:39:22.390243    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:39:22.407403    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:39:22.407414    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:39:22.418843    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:39:22.418855    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:39:22.430708    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:39:22.430723    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:39:22.454302    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:39:22.454314    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:39:22.468335    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:39:22.468345    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:39:22.480004    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:39:22.480016    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:39:22.495408    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:39:22.495417    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:39:22.534744    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:39:22.534755    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:39:22.546897    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:39:22.546907    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:39:22.558725    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:39:22.558741    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:39:22.570713    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:39:22.570729    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:39:22.595128    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:39:22.595134    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:39:25.101628    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:39:30.104057    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:39:30.104463    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:39:30.134585    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:39:30.134726    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:39:30.153235    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:39:30.153341    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:39:30.166959    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:39:30.167037    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:39:30.179044    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:39:30.179111    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:39:30.189608    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:39:30.189679    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:39:30.199996    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:39:30.200064    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:39:30.211676    9066 logs.go:276] 0 containers: []
	W0812 03:39:30.211691    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:39:30.211744    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:39:30.222089    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:39:30.222107    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:39:30.222113    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:39:30.246541    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:39:30.246553    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:39:30.266362    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:39:30.266374    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:39:30.280291    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:39:30.280302    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:39:30.291887    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:39:30.291896    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:39:30.330824    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:39:30.330832    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:39:30.344807    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:39:30.344821    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:39:30.365805    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:39:30.365819    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:39:30.402605    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:39:30.402621    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:39:30.420450    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:39:30.420463    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:39:30.431847    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:39:30.431858    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:39:30.443735    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:39:30.443745    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:39:30.454774    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:39:30.454787    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:39:30.479240    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:39:30.479248    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:39:30.491735    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:39:30.491750    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:39:30.496367    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:39:30.496376    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:39:30.521755    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:39:30.521767    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:39:33.038363    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:39:38.040840    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:39:38.041134    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:39:38.075945    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:39:38.076073    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:39:38.094424    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:39:38.094522    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:39:38.112814    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:39:38.112883    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:39:38.126093    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:39:38.126165    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:39:38.136649    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:39:38.136715    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:39:38.147817    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:39:38.147883    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:39:38.158983    9066 logs.go:276] 0 containers: []
	W0812 03:39:38.158994    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:39:38.159048    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:39:38.170055    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:39:38.170075    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:39:38.170081    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:39:38.184168    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:39:38.184179    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:39:38.196074    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:39:38.196087    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:39:38.217939    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:39:38.217950    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:39:38.229958    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:39:38.229968    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:39:38.234031    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:39:38.234038    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:39:38.245379    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:39:38.245390    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:39:38.264728    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:39:38.264738    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:39:38.280370    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:39:38.280380    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:39:38.318005    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:39:38.318014    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:39:38.342216    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:39:38.342229    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:39:38.356226    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:39:38.356236    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:39:38.371670    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:39:38.371683    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:39:38.382653    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:39:38.382666    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:39:38.406684    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:39:38.406694    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:39:38.445778    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:39:38.445793    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:39:38.461224    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:39:38.461236    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:39:40.974426    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:39:45.976802    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:39:45.976973    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:39:45.997181    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:39:45.997257    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:39:46.011821    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:39:46.011880    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:39:46.023216    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:39:46.023287    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:39:46.033672    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:39:46.033745    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:39:46.044541    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:39:46.044612    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:39:46.057527    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:39:46.057594    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:39:46.067517    9066 logs.go:276] 0 containers: []
	W0812 03:39:46.067527    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:39:46.067578    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:39:46.077916    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:39:46.077986    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:39:46.077992    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:39:46.092420    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:39:46.092431    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:39:46.103953    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:39:46.103966    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:39:46.116493    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:39:46.116505    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:39:46.156222    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:39:46.156232    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:39:46.170414    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:39:46.170431    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:39:46.195589    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:39:46.195600    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:39:46.208947    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:39:46.208959    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:39:46.220582    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:39:46.220593    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:39:46.235016    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:39:46.235030    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:39:46.249370    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:39:46.249380    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:39:46.261172    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:39:46.261181    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:39:46.272530    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:39:46.272541    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:39:46.277122    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:39:46.277128    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:39:46.312197    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:39:46.312210    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:39:46.324135    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:39:46.324146    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:39:46.341549    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:39:46.341565    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:39:48.867817    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:39:53.870049    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:39:53.870269    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:39:53.889444    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:39:53.889532    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:39:53.903017    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:39:53.903092    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:39:53.914475    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:39:53.914534    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:39:53.924951    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:39:53.925019    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:39:53.935333    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:39:53.935402    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:39:53.947576    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:39:53.947647    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:39:53.958047    9066 logs.go:276] 0 containers: []
	W0812 03:39:53.958059    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:39:53.958111    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:39:53.968682    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:39:53.968698    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:39:53.968703    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:39:53.980047    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:39:53.980060    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:39:53.991137    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:39:53.991148    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:39:54.015864    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:39:54.015875    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:39:54.027149    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:39:54.027162    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:39:54.042334    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:39:54.042346    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:39:54.061975    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:39:54.061985    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:39:54.075785    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:39:54.075795    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:39:54.099910    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:39:54.099921    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:39:54.111757    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:39:54.111769    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:39:54.123714    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:39:54.123725    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:39:54.136206    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:39:54.136222    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:39:54.176001    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:39:54.176010    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:39:54.180515    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:39:54.180522    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:39:54.215318    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:39:54.215330    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:39:54.230600    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:39:54.230618    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:39:54.245169    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:39:54.245185    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:39:56.758613    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:40:01.760109    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:40:01.760220    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:40:01.772441    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:40:01.772516    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:40:01.783608    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:40:01.783683    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:40:01.793678    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:40:01.793750    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:40:01.804053    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:40:01.804119    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:40:01.814278    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:40:01.814357    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:40:01.824473    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:40:01.824537    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:40:01.838822    9066 logs.go:276] 0 containers: []
	W0812 03:40:01.838833    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:40:01.838886    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:40:01.849698    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:40:01.849714    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:40:01.849719    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:40:01.864043    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:40:01.864055    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:40:01.875931    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:40:01.875942    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:40:01.887852    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:40:01.887862    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:40:01.912290    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:40:01.912302    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:40:01.926983    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:40:01.926994    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:40:01.931381    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:40:01.931391    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:40:01.945495    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:40:01.945505    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:40:01.959347    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:40:01.959360    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:40:01.970778    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:40:01.970789    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:40:01.982279    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:40:01.982290    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:40:02.006014    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:40:02.006024    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:40:02.017177    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:40:02.017187    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:40:02.053802    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:40:02.053811    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:40:02.069006    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:40:02.069017    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:40:02.091774    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:40:02.091785    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:40:02.103692    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:40:02.103702    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:40:04.643293    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:40:09.645463    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:40:09.645568    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:40:09.660988    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:40:09.661056    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:40:09.675671    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:40:09.675741    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:40:09.694064    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:40:09.694130    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:40:09.704527    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:40:09.704586    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:40:09.715251    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:40:09.715318    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:40:09.725980    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:40:09.726042    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:40:09.738405    9066 logs.go:276] 0 containers: []
	W0812 03:40:09.738416    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:40:09.738469    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:40:09.749107    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:40:09.749124    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:40:09.749129    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:40:09.762675    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:40:09.762686    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:40:09.774563    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:40:09.774574    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:40:09.789209    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:40:09.789220    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:40:09.793342    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:40:09.793351    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:40:09.829773    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:40:09.829788    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:40:09.844593    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:40:09.844604    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:40:09.856535    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:40:09.856545    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:40:09.874600    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:40:09.874610    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:40:09.886029    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:40:09.886038    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:40:09.897247    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:40:09.897256    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:40:09.908272    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:40:09.908284    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:40:09.945387    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:40:09.945396    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:40:09.956700    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:40:09.956713    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:40:09.980933    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:40:09.980941    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:40:09.993190    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:40:09.993201    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:40:10.025666    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:40:10.025681    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:40:12.541837    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:40:17.544492    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:40:17.544833    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:40:17.580988    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:40:17.581125    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:40:17.601692    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:40:17.601815    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:40:17.616720    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:40:17.616796    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:40:17.635355    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:40:17.635430    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:40:17.646476    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:40:17.646545    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:40:17.657163    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:40:17.657241    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:40:17.669435    9066 logs.go:276] 0 containers: []
	W0812 03:40:17.669448    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:40:17.669511    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:40:17.679944    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:40:17.679962    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:40:17.679968    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:40:17.697598    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:40:17.697608    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:40:17.735959    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:40:17.735972    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:40:17.749865    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:40:17.749876    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:40:17.764000    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:40:17.764011    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:40:17.775975    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:40:17.775988    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:40:17.787450    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:40:17.787462    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:40:17.798729    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:40:17.798739    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:40:17.822580    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:40:17.822587    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:40:17.834991    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:40:17.835002    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:40:17.850258    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:40:17.850273    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:40:17.861470    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:40:17.861482    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:40:17.865909    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:40:17.865917    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:40:17.899324    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:40:17.899336    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:40:17.924774    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:40:17.924784    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:40:17.939100    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:40:17.939112    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:40:17.950836    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:40:17.950846    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:40:20.464423    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:40:25.467050    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:40:25.467403    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:40:25.500276    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:40:25.500408    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:40:25.520516    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:40:25.520616    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:40:25.534342    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:40:25.534418    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:40:25.546564    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:40:25.546643    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:40:25.557352    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:40:25.557422    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:40:25.568054    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:40:25.568119    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:40:25.579897    9066 logs.go:276] 0 containers: []
	W0812 03:40:25.579918    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:40:25.579981    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:40:25.594361    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:40:25.594378    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:40:25.594383    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:40:25.599191    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:40:25.599199    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:40:25.622241    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:40:25.622254    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:40:25.642105    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:40:25.642116    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:40:25.654165    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:40:25.654176    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:40:25.692828    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:40:25.692841    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:40:25.718489    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:40:25.718500    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:40:25.734470    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:40:25.734485    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:40:25.758247    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:40:25.758254    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:40:25.775198    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:40:25.775210    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:40:25.810251    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:40:25.810266    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:40:25.825445    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:40:25.825457    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:40:25.837548    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:40:25.837564    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:40:25.849365    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:40:25.849376    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:40:25.861185    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:40:25.861195    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:40:25.874843    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:40:25.874854    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:40:25.888974    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:40:25.888983    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:40:28.405644    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:40:33.408385    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:40:33.408709    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:40:33.441544    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:40:33.441676    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:40:33.460757    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:40:33.460857    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:40:33.479576    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:40:33.479656    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:40:33.494115    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:40:33.494179    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:40:33.505072    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:40:33.505140    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:40:33.521557    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:40:33.521631    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:40:33.531625    9066 logs.go:276] 0 containers: []
	W0812 03:40:33.531636    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:40:33.531689    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:40:33.542338    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:40:33.542357    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:40:33.542363    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:40:33.555071    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:40:33.555084    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:40:33.567610    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:40:33.567623    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:40:33.572360    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:40:33.572371    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:40:33.598209    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:40:33.598220    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:40:33.613064    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:40:33.613077    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:40:33.630443    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:40:33.630457    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:40:33.648862    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:40:33.648877    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:40:33.686352    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:40:33.686375    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:40:33.701878    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:40:33.701904    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:40:33.715102    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:40:33.715115    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:40:33.727020    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:40:33.727033    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:40:33.752410    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:40:33.752424    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:40:33.786723    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:40:33.786734    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:40:33.805965    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:40:33.805976    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:40:33.821232    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:40:33.821242    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:40:33.832934    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:40:33.832948    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:40:36.346446    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:40:41.348777    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:40:41.349307    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:40:41.364427    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:40:41.364517    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:40:41.376648    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:40:41.376715    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:40:41.389830    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:40:41.389895    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:40:41.399993    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:40:41.400062    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:40:41.410081    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:40:41.410150    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:40:41.420865    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:40:41.420931    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:40:41.431006    9066 logs.go:276] 0 containers: []
	W0812 03:40:41.431018    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:40:41.431072    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:40:41.441499    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:40:41.441517    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:40:41.441523    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:40:41.458790    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:40:41.458803    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:40:41.483480    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:40:41.483488    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:40:41.498287    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:40:41.498298    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:40:41.514144    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:40:41.514156    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:40:41.525997    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:40:41.526007    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:40:41.538137    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:40:41.538147    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:40:41.549978    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:40:41.549993    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:40:41.554485    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:40:41.554494    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:40:41.590216    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:40:41.590228    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:40:41.604734    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:40:41.604746    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:40:41.619396    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:40:41.619408    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:40:41.657201    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:40:41.657215    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:40:41.682063    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:40:41.682075    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:40:41.693908    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:40:41.693923    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:40:41.715825    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:40:41.715840    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:40:41.734064    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:40:41.734076    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
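
The cycle above repeats throughout this test: api_server.go polls https://10.0.2.15:8443/healthz, the 5-second client timeout expires, and logs.go then enumerates every kube-system container and re-gathers its logs before the next attempt. A minimal shell sketch of that probe-then-diagnose loop, assuming curl and docker are available inside the guest (the endpoint and the --tail 400 value come from the log; the container-name list and retry delay are illustrative, not minikube's actual scheduling):

	# Sketch of the healthz wait loop seen above: poll, and on timeout dump
	# the last 400 log lines from each control-plane container.
	HEALTHZ="https://10.0.2.15:8443/healthz"
	while ! curl -sk --max-time 5 "$HEALTHZ" | grep -q '^ok'; do
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	              kube-controller-manager kindnet storage-provisioner; do
	    for id in $(docker ps -a --filter "name=k8s_${name}" --format '{{.ID}}'); do
	      docker logs --tail 400 "$id"
	    done
	  done
	  sleep 2   # illustrative back-off; the real loop re-polls on its own schedule
	done
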
	I0812 03:40:44.247995    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:40:49.250315    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:40:49.250466    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:40:49.263458    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:40:49.263539    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:40:49.274357    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:40:49.274429    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:40:49.285263    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:40:49.285331    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:40:49.295566    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:40:49.295637    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:40:49.305599    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:40:49.305667    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:40:49.317385    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:40:49.317444    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:40:49.327617    9066 logs.go:276] 0 containers: []
	W0812 03:40:49.327634    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:40:49.327690    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:40:49.340175    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:40:49.340194    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:40:49.340200    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:40:49.351553    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:40:49.351564    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:40:49.376043    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:40:49.376053    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:40:49.414418    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:40:49.414431    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:40:49.426517    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:40:49.426532    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:40:49.437548    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:40:49.437558    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:40:49.454702    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:40:49.454713    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:40:49.472107    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:40:49.472116    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:40:49.487668    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:40:49.487678    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:40:49.499479    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:40:49.499490    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:40:49.540179    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:40:49.540190    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:40:49.554889    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:40:49.554901    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:40:49.579779    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:40:49.579790    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:40:49.598884    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:40:49.598895    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:40:49.603517    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:40:49.603524    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:40:49.627730    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:40:49.627740    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:40:49.639237    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:40:49.639251    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:40:52.151487    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:40:57.153733    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:40:57.153879    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:40:57.165531    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:40:57.165607    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:40:57.176720    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:40:57.176802    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:40:57.187449    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:40:57.187524    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:40:57.206217    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:40:57.206287    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:40:57.217252    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:40:57.217327    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:40:57.230276    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:40:57.230342    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:40:57.240247    9066 logs.go:276] 0 containers: []
	W0812 03:40:57.240259    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:40:57.240315    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:40:57.250727    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:40:57.250745    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:40:57.250750    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:40:57.265769    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:40:57.265788    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:40:57.279972    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:40:57.279983    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:40:57.291398    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:40:57.291409    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:40:57.303178    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:40:57.303187    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:40:57.326319    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:40:57.326327    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:40:57.330544    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:40:57.330550    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:40:57.364784    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:40:57.364795    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:40:57.379836    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:40:57.379847    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:40:57.391802    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:40:57.391814    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:40:57.403110    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:40:57.403121    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:40:57.415877    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:40:57.415888    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:40:57.452486    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:40:57.452494    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:40:57.469973    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:40:57.469984    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:40:57.482128    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:40:57.482141    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:40:57.507401    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:40:57.507411    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:40:57.518593    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:40:57.518605    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:41:00.038315    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:41:05.039038    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:41:05.039199    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:41:05.053115    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:41:05.053201    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:41:05.064215    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:41:05.064281    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:41:05.075388    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:41:05.075460    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:41:05.088861    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:41:05.088931    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:41:05.099073    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:41:05.099141    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:41:05.110918    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:41:05.110993    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:41:05.121460    9066 logs.go:276] 0 containers: []
	W0812 03:41:05.121472    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:41:05.121531    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:41:05.132268    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:41:05.132286    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:41:05.132291    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:41:05.145752    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:41:05.145761    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:41:05.160421    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:41:05.160433    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:41:05.172598    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:41:05.172614    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:41:05.188856    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:41:05.188866    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:41:05.200254    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:41:05.200266    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:41:05.204340    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:41:05.204348    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:41:05.240415    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:41:05.240429    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:41:05.252167    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:41:05.252178    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:41:05.275544    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:41:05.275555    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:41:05.287942    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:41:05.287955    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:41:05.299280    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:41:05.299289    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:41:05.314441    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:41:05.314451    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:41:05.328476    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:41:05.328489    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:41:05.340001    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:41:05.340012    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:41:05.357411    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:41:05.357424    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:41:05.395520    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:41:05.395530    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:41:07.923452    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:41:12.926151    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:41:12.926618    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:41:12.957725    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:41:12.957854    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:41:12.986303    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:41:12.986391    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:41:12.998848    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:41:12.998925    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:41:13.009911    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:41:13.009978    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:41:13.034200    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:41:13.034274    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:41:13.060870    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:41:13.060945    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:41:13.071262    9066 logs.go:276] 0 containers: []
	W0812 03:41:13.071277    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:41:13.071335    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:41:13.082250    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:41:13.082268    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:41:13.082274    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:41:13.107597    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:41:13.107613    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:41:13.119335    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:41:13.119351    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:41:13.130706    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:41:13.130719    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:41:13.142747    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:41:13.142759    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:41:13.180220    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:41:13.180233    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:41:13.194385    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:41:13.194398    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:41:13.206385    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:41:13.206399    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:41:13.217951    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:41:13.217965    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:41:13.255077    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:41:13.255086    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:41:13.259307    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:41:13.259316    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:41:13.273306    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:41:13.273320    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:41:13.284555    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:41:13.284566    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:41:13.301477    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:41:13.301487    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:41:13.315685    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:41:13.315695    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:41:13.330591    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:41:13.330602    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:41:13.342192    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:41:13.342203    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:41:15.867480    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:41:20.870076    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:41:20.870472    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:41:20.903179    9066 logs.go:276] 2 containers: [15af67a3c837 18fa8e4baf80]
	I0812 03:41:20.903297    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:41:20.921655    9066 logs.go:276] 2 containers: [957c5a7cd92d 56d45e7374fb]
	I0812 03:41:20.921735    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:41:20.935541    9066 logs.go:276] 1 containers: [1db52268e1ee]
	I0812 03:41:20.935604    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:41:20.948020    9066 logs.go:276] 2 containers: [dd9eeeddd568 a41e64288824]
	I0812 03:41:20.948088    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:41:20.959127    9066 logs.go:276] 1 containers: [e59ff89fc210]
	I0812 03:41:20.959194    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:41:20.977603    9066 logs.go:276] 2 containers: [1e496ebb6115 93391c2226c7]
	I0812 03:41:20.977670    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:41:20.989198    9066 logs.go:276] 0 containers: []
	W0812 03:41:20.989214    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:41:20.989266    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:41:21.006359    9066 logs.go:276] 2 containers: [b84f3691d84f 91026c3638d8]
	I0812 03:41:21.006378    9066 logs.go:123] Gathering logs for etcd [56d45e7374fb] ...
	I0812 03:41:21.006382    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d45e7374fb"
	I0812 03:41:21.026294    9066 logs.go:123] Gathering logs for coredns [1db52268e1ee] ...
	I0812 03:41:21.026305    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db52268e1ee"
	I0812 03:41:21.042744    9066 logs.go:123] Gathering logs for kube-proxy [e59ff89fc210] ...
	I0812 03:41:21.042756    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59ff89fc210"
	I0812 03:41:21.060782    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:41:21.060791    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:41:21.083921    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:41:21.083929    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:41:21.088103    9066 logs.go:123] Gathering logs for etcd [957c5a7cd92d] ...
	I0812 03:41:21.088109    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957c5a7cd92d"
	I0812 03:41:21.102246    9066 logs.go:123] Gathering logs for storage-provisioner [91026c3638d8] ...
	I0812 03:41:21.102257    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91026c3638d8"
	I0812 03:41:21.114459    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:41:21.114470    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:41:21.126776    9066 logs.go:123] Gathering logs for kube-apiserver [15af67a3c837] ...
	I0812 03:41:21.126787    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15af67a3c837"
	I0812 03:41:21.141229    9066 logs.go:123] Gathering logs for kube-apiserver [18fa8e4baf80] ...
	I0812 03:41:21.141240    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18fa8e4baf80"
	I0812 03:41:21.165835    9066 logs.go:123] Gathering logs for kube-controller-manager [1e496ebb6115] ...
	I0812 03:41:21.165846    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e496ebb6115"
	I0812 03:41:21.188186    9066 logs.go:123] Gathering logs for kube-controller-manager [93391c2226c7] ...
	I0812 03:41:21.188197    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93391c2226c7"
	I0812 03:41:21.200036    9066 logs.go:123] Gathering logs for kube-scheduler [dd9eeeddd568] ...
	I0812 03:41:21.200047    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd9eeeddd568"
	I0812 03:41:21.211372    9066 logs.go:123] Gathering logs for kube-scheduler [a41e64288824] ...
	I0812 03:41:21.211383    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41e64288824"
	I0812 03:41:21.226852    9066 logs.go:123] Gathering logs for storage-provisioner [b84f3691d84f] ...
	I0812 03:41:21.226862    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f3691d84f"
	I0812 03:41:21.238676    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:41:21.238688    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:41:21.277763    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:41:21.277771    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:41:23.822071    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:41:28.823989    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:41:28.824107    9066 kubeadm.go:597] duration metric: took 4m3.92323025s to restartPrimaryControlPlane
	W0812 03:41:28.824188    9066 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0812 03:41:28.824226    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0812 03:41:29.861596    9066 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.037366334s)
	I0812 03:41:29.861673    9066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 03:41:29.866697    9066 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 03:41:29.869631    9066 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 03:41:29.872518    9066 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 03:41:29.872524    9066 kubeadm.go:157] found existing configuration files:
	
	I0812 03:41:29.872548    9066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51463 /etc/kubernetes/admin.conf
	I0812 03:41:29.875136    9066 kubeadm.go:163] "https://control-plane.minikube.internal:51463" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51463 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 03:41:29.875158    9066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 03:41:29.878104    9066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51463 /etc/kubernetes/kubelet.conf
	I0812 03:41:29.881013    9066 kubeadm.go:163] "https://control-plane.minikube.internal:51463" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51463 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 03:41:29.881033    9066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 03:41:29.883451    9066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51463 /etc/kubernetes/controller-manager.conf
	I0812 03:41:29.886123    9066 kubeadm.go:163] "https://control-plane.minikube.internal:51463" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51463 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 03:41:29.886150    9066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 03:41:29.889021    9066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51463 /etc/kubernetes/scheduler.conf
	I0812 03:41:29.891393    9066 kubeadm.go:163] "https://control-plane.minikube.internal:51463" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51463 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 03:41:29.891412    9066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
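
The four grep-then-rm pairs above are the stale-kubeconfig check: for each file under /etc/kubernetes, minikube greps for the expected control-plane endpoint and removes the file when the endpoint is missing. Here grep exits with status 2 because the files do not exist at all, so the rm calls are no-ops. The same sequence condensed into one loop, with the endpoint and file names copied from the log:

	# Stale-kubeconfig cleanup, as run above, in loop form.
	ENDPOINT="https://control-plane.minikube.internal:51463"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # grep fails on both "no match" (1) and "no such file" (2); either way, remove.
	  sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done
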
	I0812 03:41:29.894246    9066 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 03:41:29.913748    9066 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0812 03:41:29.913781    9066 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 03:41:29.960422    9066 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 03:41:29.960479    9066 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 03:41:29.960532    9066 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
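
A note on the long --ignore-preflight-errors list passed to kubeadm init above: each entry names one preflight check, and path-based checks encode the path with '/' replaced by '-', which is why the names read the way they do. Listing a check downgrades its failure to a warning instead of aborting init. Mapping a few of the entries back to what they test (derived from the names in the command above):

	#   DirAvailable--etc-kubernetes-manifests            -> is /etc/kubernetes/manifests empty or absent?
	#   FileAvailable--etc-kubernetes-manifests-etcd.yaml -> does that static-pod manifest already exist?
	#   Port-10250                                        -> is the kubelet port already bound?
	#   Swap, NumCPU, Mem                                 -> host-resource checks
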
	I0812 03:41:30.009410    9066 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 03:41:30.013566    9066 out.go:204]   - Generating certificates and keys ...
	I0812 03:41:30.013633    9066 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 03:41:30.013667    9066 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 03:41:30.013717    9066 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0812 03:41:30.013747    9066 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0812 03:41:30.013783    9066 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0812 03:41:30.013809    9066 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0812 03:41:30.013854    9066 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0812 03:41:30.013886    9066 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0812 03:41:30.013928    9066 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0812 03:41:30.013968    9066 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0812 03:41:30.013992    9066 kubeadm.go:310] [certs] Using the existing "sa" key
	I0812 03:41:30.014025    9066 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 03:41:30.113789    9066 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 03:41:30.158655    9066 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 03:41:30.254072    9066 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 03:41:30.394554    9066 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 03:41:30.423983    9066 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 03:41:30.424351    9066 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 03:41:30.424380    9066 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 03:41:30.513844    9066 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 03:41:30.521335    9066 out.go:204]   - Booting up control plane ...
	I0812 03:41:30.521444    9066 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 03:41:30.521495    9066 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 03:41:30.521532    9066 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 03:41:30.521620    9066 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 03:41:30.521725    9066 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0812 03:41:35.018532    9066 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502011 seconds
	I0812 03:41:35.018778    9066 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0812 03:41:35.025904    9066 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0812 03:41:35.535408    9066 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0812 03:41:35.535521    9066 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-743000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0812 03:41:36.039516    9066 kubeadm.go:310] [bootstrap-token] Using token: ib1xsa.uqweb83p8pru5fi1
	I0812 03:41:36.042766    9066 out.go:204]   - Configuring RBAC rules ...
	I0812 03:41:36.042819    9066 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0812 03:41:36.042906    9066 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0812 03:41:36.047132    9066 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0812 03:41:36.047942    9066 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0812 03:41:36.048780    9066 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0812 03:41:36.049789    9066 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0812 03:41:36.052922    9066 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0812 03:41:36.230783    9066 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0812 03:41:36.443936    9066 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0812 03:41:36.444333    9066 kubeadm.go:310] 
	I0812 03:41:36.444365    9066 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0812 03:41:36.444368    9066 kubeadm.go:310] 
	I0812 03:41:36.444407    9066 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0812 03:41:36.444411    9066 kubeadm.go:310] 
	I0812 03:41:36.444424    9066 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0812 03:41:36.444461    9066 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0812 03:41:36.444490    9066 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0812 03:41:36.444495    9066 kubeadm.go:310] 
	I0812 03:41:36.444521    9066 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0812 03:41:36.444524    9066 kubeadm.go:310] 
	I0812 03:41:36.444550    9066 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0812 03:41:36.444553    9066 kubeadm.go:310] 
	I0812 03:41:36.444578    9066 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0812 03:41:36.444618    9066 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0812 03:41:36.444660    9066 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0812 03:41:36.444664    9066 kubeadm.go:310] 
	I0812 03:41:36.444708    9066 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0812 03:41:36.444751    9066 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0812 03:41:36.444756    9066 kubeadm.go:310] 
	I0812 03:41:36.444796    9066 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ib1xsa.uqweb83p8pru5fi1 \
	I0812 03:41:36.444853    9066 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a3a24dc3606022793e481fb5bba25e8937e026ae56b76602b092063eafcc562a \
	I0812 03:41:36.444863    9066 kubeadm.go:310] 	--control-plane 
	I0812 03:41:36.444868    9066 kubeadm.go:310] 
	I0812 03:41:36.444912    9066 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0812 03:41:36.444917    9066 kubeadm.go:310] 
	I0812 03:41:36.444957    9066 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ib1xsa.uqweb83p8pru5fi1 \
	I0812 03:41:36.445017    9066 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a3a24dc3606022793e481fb5bba25e8937e026ae56b76602b092063eafcc562a 
	I0812 03:41:36.445300    9066 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
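
The join commands printed above pair a bootstrap token with a --discovery-token-ca-cert-hash. If the printed hash is lost, it can be recomputed from the cluster CA with the standard kubeadm recipe (sha256 over the CA public key in DER form; assumes the default RSA CA key). Note this guest keeps its certificates under /var/lib/minikube/certs, per the [certs] lines above, rather than the stock /etc/kubernetes/pki:

	# Recompute the discovery hash from the cluster CA certificate.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
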
	I0812 03:41:36.445308    9066 cni.go:84] Creating CNI manager for ""
	I0812 03:41:36.445317    9066 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0812 03:41:36.449656    9066 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0812 03:41:36.456585    9066 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0812 03:41:36.460544    9066 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
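
The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. For orientation, a bridge conflist of the general shape minikube generates for the "bridge" CNI choice looks roughly like the following; treat every field value as an illustrative assumption, not the actual file contents:

	# Hypothetical bridge CNI conflist; the real 496-byte file is not shown in the log.
	cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF
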
	I0812 03:41:36.466330    9066 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 03:41:36.466385    9066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 03:41:36.466443    9066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-743000 minikube.k8s.io/updated_at=2024_08_12T03_41_36_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7 minikube.k8s.io/name=stopped-upgrade-743000 minikube.k8s.io/primary=true
	I0812 03:41:36.509340    9066 ops.go:34] apiserver oom_adj: -16
	I0812 03:41:36.509348    9066 kubeadm.go:1113] duration metric: took 43.012042ms to wait for elevateKubeSystemPrivileges
	I0812 03:41:36.509358    9066 kubeadm.go:394] duration metric: took 4m11.6268415s to StartCluster
	I0812 03:41:36.509368    9066 settings.go:142] acquiring lock: {Name:mk405bca217b1764467e7caec79ed71135791229 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:41:36.509453    9066 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:41:36.509857    9066 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/kubeconfig: {Name:mkb70885d9201a61b449567803d8de7b739c5101 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:41:36.510071    9066 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:41:36.510076    9066 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0812 03:41:36.510115    9066 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-743000"
	I0812 03:41:36.510120    9066 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-743000"
	I0812 03:41:36.510133    9066 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-743000"
	I0812 03:41:36.510156    9066 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-743000"
	W0812 03:41:36.510189    9066 addons.go:243] addon storage-provisioner should already be in state true
	I0812 03:41:36.510199    9066 host.go:66] Checking if "stopped-upgrade-743000" exists ...
	I0812 03:41:36.510162    9066 config.go:182] Loaded profile config "stopped-upgrade-743000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0812 03:41:36.514442    9066 out.go:177] * Verifying Kubernetes components...
	I0812 03:41:36.515095    9066 kapi.go:59] client config for stopped-upgrade-743000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/stopped-upgrade-743000/client.key", CAFile:"/Users/jenkins/minikube-integration/19409-6342/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1038744e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0812 03:41:36.518814    9066 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-743000"
	W0812 03:41:36.518821    9066 addons.go:243] addon default-storageclass should already be in state true
	I0812 03:41:36.518829    9066 host.go:66] Checking if "stopped-upgrade-743000" exists ...
	I0812 03:41:36.519460    9066 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 03:41:36.519468    9066 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 03:41:36.519474    9066 sshutil.go:53] new ssh client: &{IP:localhost Port:51428 SSHKeyPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/stopped-upgrade-743000/id_rsa Username:docker}
	I0812 03:41:36.522584    9066 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 03:41:36.526606    9066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 03:41:36.530624    9066 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 03:41:36.530630    9066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 03:41:36.530639    9066 sshutil.go:53] new ssh client: &{IP:localhost Port:51428 SSHKeyPath:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/stopped-upgrade-743000/id_rsa Username:docker}
	I0812 03:41:36.626442    9066 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 03:41:36.631839    9066 api_server.go:52] waiting for apiserver process to appear ...
	I0812 03:41:36.631886    9066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 03:41:36.635663    9066 api_server.go:72] duration metric: took 125.583583ms to wait for apiserver process to appear ...
	I0812 03:41:36.635670    9066 api_server.go:88] waiting for apiserver healthz status ...
	I0812 03:41:36.635677    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:41:36.699965    9066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 03:41:36.723552    9066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 03:41:41.637740    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:41:41.637773    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:41:46.638013    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:41:46.638058    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:41:51.638845    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:41:51.638871    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:41:56.639355    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:41:56.639399    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:42:01.640195    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:42:01.640221    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:42:06.641558    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:42:06.641591    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0812 03:42:07.044729    9066 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0812 03:42:07.052863    9066 out.go:177] * Enabled addons: storage-provisioner
	I0812 03:42:07.060019    9066 addons.go:510] duration metric: took 30.550351334s for enable addons: enabled=[storage-provisioner]
	I0812 03:42:11.641754    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:42:11.641784    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:42:16.643032    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:42:16.643055    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:42:21.644903    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:42:21.644956    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:42:26.647187    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:42:26.647209    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:42:31.649385    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:42:31.649459    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:42:36.651687    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:42:36.651794    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:42:36.662740    9066 logs.go:276] 1 containers: [905bc1caf712]
	I0812 03:42:36.662808    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:42:36.673990    9066 logs.go:276] 1 containers: [bfbe626398fc]
	I0812 03:42:36.674068    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:42:36.685233    9066 logs.go:276] 2 containers: [b92bd2d7e951 4c5e55542ab2]
	I0812 03:42:36.685298    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:42:36.695637    9066 logs.go:276] 1 containers: [82b3aa847fe7]
	I0812 03:42:36.695703    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:42:36.706505    9066 logs.go:276] 1 containers: [a59a0c8eb222]
	I0812 03:42:36.706581    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:42:36.717448    9066 logs.go:276] 1 containers: [13754d953934]
	I0812 03:42:36.717507    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:42:36.728066    9066 logs.go:276] 0 containers: []
	W0812 03:42:36.728078    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:42:36.728127    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:42:36.738835    9066 logs.go:276] 1 containers: [9fc97d13acff]
	I0812 03:42:36.738850    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:42:36.738857    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:42:36.763960    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:42:36.763967    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:42:36.798632    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:42:36.798643    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:42:36.836919    9066 logs.go:123] Gathering logs for coredns [b92bd2d7e951] ...
	I0812 03:42:36.836932    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b92bd2d7e951"
	I0812 03:42:36.848649    9066 logs.go:123] Gathering logs for coredns [4c5e55542ab2] ...
	I0812 03:42:36.848659    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5e55542ab2"
	I0812 03:42:36.860389    9066 logs.go:123] Gathering logs for kube-scheduler [82b3aa847fe7] ...
	I0812 03:42:36.860402    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82b3aa847fe7"
	I0812 03:42:36.876563    9066 logs.go:123] Gathering logs for kube-proxy [a59a0c8eb222] ...
	I0812 03:42:36.876574    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59a0c8eb222"
	I0812 03:42:36.889253    9066 logs.go:123] Gathering logs for kube-controller-manager [13754d953934] ...
	I0812 03:42:36.889266    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13754d953934"
	I0812 03:42:36.907814    9066 logs.go:123] Gathering logs for storage-provisioner [9fc97d13acff] ...
	I0812 03:42:36.907825    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc97d13acff"
	I0812 03:42:36.919776    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:42:36.919787    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:42:36.923928    9066 logs.go:123] Gathering logs for kube-apiserver [905bc1caf712] ...
	I0812 03:42:36.923935    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 905bc1caf712"
	I0812 03:42:36.938916    9066 logs.go:123] Gathering logs for etcd [bfbe626398fc] ...
	I0812 03:42:36.938927    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfbe626398fc"
	I0812 03:42:36.953845    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:42:36.953855    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
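
The cycle above is this section's recurring pattern: each /healthz probe against https://10.0.2.15:8443 gives up after roughly five seconds (compare the probe at 03:42:39.467 with the "stopped" line at 03:42:44.470 just below), and every failure triggers another sweep of container enumeration and log gathering. As a minimal sketch of that probe-with-client-timeout pattern (not minikube's actual api_server.go code; checkHealthz is a hypothetical helper, while the endpoint and the 5s budget come straight from the log):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // checkHealthz probes the apiserver once and gives up after the same
    // ~5s budget visible between consecutive log lines above.
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // assumption: a bootstrap probe against a self-signed
                // apiserver certificate would skip verification like this
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return fmt.Errorf("stopped: %s: %w", url, err)
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
            fmt.Println(err) // e.g. context deadline exceeded, as logged above
        }
    }
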
	I0812 03:42:39.467379    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:42:44.470089    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:42:44.470555    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:42:44.508878    9066 logs.go:276] 1 containers: [905bc1caf712]
	I0812 03:42:44.508994    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:42:44.530027    9066 logs.go:276] 1 containers: [bfbe626398fc]
	I0812 03:42:44.530127    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:42:44.545516    9066 logs.go:276] 2 containers: [b92bd2d7e951 4c5e55542ab2]
	I0812 03:42:44.545580    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:42:44.558167    9066 logs.go:276] 1 containers: [82b3aa847fe7]
	I0812 03:42:44.558237    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:42:44.569256    9066 logs.go:276] 1 containers: [a59a0c8eb222]
	I0812 03:42:44.569324    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:42:44.584484    9066 logs.go:276] 1 containers: [13754d953934]
	I0812 03:42:44.584547    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:42:44.595502    9066 logs.go:276] 0 containers: []
	W0812 03:42:44.595515    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:42:44.595570    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:42:44.606422    9066 logs.go:276] 1 containers: [9fc97d13acff]
	I0812 03:42:44.606435    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:42:44.606441    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:42:44.642275    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:42:44.642286    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:42:44.647054    9066 logs.go:123] Gathering logs for coredns [b92bd2d7e951] ...
	I0812 03:42:44.647060    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b92bd2d7e951"
	I0812 03:42:44.659451    9066 logs.go:123] Gathering logs for coredns [4c5e55542ab2] ...
	I0812 03:42:44.659464    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5e55542ab2"
	I0812 03:42:44.672937    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:42:44.672950    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:42:44.684506    9066 logs.go:123] Gathering logs for storage-provisioner [9fc97d13acff] ...
	I0812 03:42:44.684517    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc97d13acff"
	I0812 03:42:44.696382    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:42:44.696394    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:42:44.719289    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:42:44.719297    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:42:44.756447    9066 logs.go:123] Gathering logs for kube-apiserver [905bc1caf712] ...
	I0812 03:42:44.756458    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 905bc1caf712"
	I0812 03:42:44.770852    9066 logs.go:123] Gathering logs for etcd [bfbe626398fc] ...
	I0812 03:42:44.770865    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfbe626398fc"
	I0812 03:42:44.785763    9066 logs.go:123] Gathering logs for kube-scheduler [82b3aa847fe7] ...
	I0812 03:42:44.785775    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82b3aa847fe7"
	I0812 03:42:44.800984    9066 logs.go:123] Gathering logs for kube-proxy [a59a0c8eb222] ...
	I0812 03:42:44.800997    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59a0c8eb222"
	I0812 03:42:44.814165    9066 logs.go:123] Gathering logs for kube-controller-manager [13754d953934] ...
	I0812 03:42:44.814176    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13754d953934"
	I0812 03:42:47.333830    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:42:52.336588    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:42:52.336951    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:42:52.371911    9066 logs.go:276] 1 containers: [905bc1caf712]
	I0812 03:42:52.372036    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:42:52.391095    9066 logs.go:276] 1 containers: [bfbe626398fc]
	I0812 03:42:52.391175    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:42:52.405564    9066 logs.go:276] 2 containers: [b92bd2d7e951 4c5e55542ab2]
	I0812 03:42:52.405633    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:42:52.418002    9066 logs.go:276] 1 containers: [82b3aa847fe7]
	I0812 03:42:52.418062    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:42:52.429352    9066 logs.go:276] 1 containers: [a59a0c8eb222]
	I0812 03:42:52.429423    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:42:52.440506    9066 logs.go:276] 1 containers: [13754d953934]
	I0812 03:42:52.440570    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:42:52.451087    9066 logs.go:276] 0 containers: []
	W0812 03:42:52.451097    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:42:52.451148    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:42:52.461482    9066 logs.go:276] 1 containers: [9fc97d13acff]
	I0812 03:42:52.461501    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:42:52.461506    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:42:52.497067    9066 logs.go:123] Gathering logs for kube-apiserver [905bc1caf712] ...
	I0812 03:42:52.497077    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 905bc1caf712"
	I0812 03:42:52.512650    9066 logs.go:123] Gathering logs for etcd [bfbe626398fc] ...
	I0812 03:42:52.512663    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfbe626398fc"
	I0812 03:42:52.526650    9066 logs.go:123] Gathering logs for coredns [b92bd2d7e951] ...
	I0812 03:42:52.526661    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b92bd2d7e951"
	I0812 03:42:52.541226    9066 logs.go:123] Gathering logs for kube-proxy [a59a0c8eb222] ...
	I0812 03:42:52.541238    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59a0c8eb222"
	I0812 03:42:52.553612    9066 logs.go:123] Gathering logs for kube-controller-manager [13754d953934] ...
	I0812 03:42:52.553629    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13754d953934"
	I0812 03:42:52.571268    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:42:52.571279    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:42:52.575543    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:42:52.575550    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:42:52.609893    9066 logs.go:123] Gathering logs for coredns [4c5e55542ab2] ...
	I0812 03:42:52.609904    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5e55542ab2"
	I0812 03:42:52.622201    9066 logs.go:123] Gathering logs for kube-scheduler [82b3aa847fe7] ...
	I0812 03:42:52.622211    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82b3aa847fe7"
	I0812 03:42:52.636955    9066 logs.go:123] Gathering logs for storage-provisioner [9fc97d13acff] ...
	I0812 03:42:52.636966    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc97d13acff"
	I0812 03:42:52.648215    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:42:52.648225    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:42:52.671094    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:42:52.671102    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:42:55.184258    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:43:00.186984    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:43:00.187327    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:43:00.219033    9066 logs.go:276] 1 containers: [905bc1caf712]
	I0812 03:43:00.219145    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:43:00.238402    9066 logs.go:276] 1 containers: [bfbe626398fc]
	I0812 03:43:00.238489    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:43:00.252467    9066 logs.go:276] 2 containers: [b92bd2d7e951 4c5e55542ab2]
	I0812 03:43:00.252536    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:43:00.264365    9066 logs.go:276] 1 containers: [82b3aa847fe7]
	I0812 03:43:00.264428    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:43:00.274843    9066 logs.go:276] 1 containers: [a59a0c8eb222]
	I0812 03:43:00.274901    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:43:00.285580    9066 logs.go:276] 1 containers: [13754d953934]
	I0812 03:43:00.285645    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:43:00.295903    9066 logs.go:276] 0 containers: []
	W0812 03:43:00.295914    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:43:00.295967    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:43:00.308843    9066 logs.go:276] 1 containers: [9fc97d13acff]
	I0812 03:43:00.308859    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:43:00.308864    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:43:00.342897    9066 logs.go:123] Gathering logs for kube-apiserver [905bc1caf712] ...
	I0812 03:43:00.342911    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 905bc1caf712"
	I0812 03:43:00.357282    9066 logs.go:123] Gathering logs for coredns [b92bd2d7e951] ...
	I0812 03:43:00.357296    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b92bd2d7e951"
	I0812 03:43:00.369184    9066 logs.go:123] Gathering logs for kube-scheduler [82b3aa847fe7] ...
	I0812 03:43:00.369195    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82b3aa847fe7"
	I0812 03:43:00.384797    9066 logs.go:123] Gathering logs for kube-controller-manager [13754d953934] ...
	I0812 03:43:00.384807    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13754d953934"
	I0812 03:43:00.403008    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:43:00.403020    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:43:00.415552    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:43:00.415564    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:43:00.420021    9066 logs.go:123] Gathering logs for etcd [bfbe626398fc] ...
	I0812 03:43:00.420031    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfbe626398fc"
	I0812 03:43:00.433818    9066 logs.go:123] Gathering logs for coredns [4c5e55542ab2] ...
	I0812 03:43:00.433830    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5e55542ab2"
	I0812 03:43:00.453712    9066 logs.go:123] Gathering logs for kube-proxy [a59a0c8eb222] ...
	I0812 03:43:00.453725    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59a0c8eb222"
	I0812 03:43:00.465752    9066 logs.go:123] Gathering logs for storage-provisioner [9fc97d13acff] ...
	I0812 03:43:00.465760    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc97d13acff"
	I0812 03:43:00.479386    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:43:00.479399    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:43:00.503865    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:43:00.503874    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:43:03.037246    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:43:08.039726    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:43:08.040105    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:43:08.079506    9066 logs.go:276] 1 containers: [905bc1caf712]
	I0812 03:43:08.079638    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:43:08.100489    9066 logs.go:276] 1 containers: [bfbe626398fc]
	I0812 03:43:08.100588    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:43:08.121497    9066 logs.go:276] 2 containers: [b92bd2d7e951 4c5e55542ab2]
	I0812 03:43:08.121571    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:43:08.133255    9066 logs.go:276] 1 containers: [82b3aa847fe7]
	I0812 03:43:08.133324    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:43:08.144191    9066 logs.go:276] 1 containers: [a59a0c8eb222]
	I0812 03:43:08.144257    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:43:08.154527    9066 logs.go:276] 1 containers: [13754d953934]
	I0812 03:43:08.154589    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:43:08.165762    9066 logs.go:276] 0 containers: []
	W0812 03:43:08.165771    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:43:08.165818    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:43:08.176288    9066 logs.go:276] 1 containers: [9fc97d13acff]
	I0812 03:43:08.176303    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:43:08.176313    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:43:08.211634    9066 logs.go:123] Gathering logs for kube-apiserver [905bc1caf712] ...
	I0812 03:43:08.211643    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 905bc1caf712"
	I0812 03:43:08.227847    9066 logs.go:123] Gathering logs for coredns [b92bd2d7e951] ...
	I0812 03:43:08.227861    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b92bd2d7e951"
	I0812 03:43:08.239541    9066 logs.go:123] Gathering logs for coredns [4c5e55542ab2] ...
	I0812 03:43:08.239554    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5e55542ab2"
	I0812 03:43:08.251511    9066 logs.go:123] Gathering logs for kube-scheduler [82b3aa847fe7] ...
	I0812 03:43:08.251521    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82b3aa847fe7"
	I0812 03:43:08.266305    9066 logs.go:123] Gathering logs for kube-controller-manager [13754d953934] ...
	I0812 03:43:08.266317    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13754d953934"
	I0812 03:43:08.284677    9066 logs.go:123] Gathering logs for storage-provisioner [9fc97d13acff] ...
	I0812 03:43:08.284689    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc97d13acff"
	I0812 03:43:08.296051    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:43:08.296065    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:43:08.331607    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:43:08.331616    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:43:08.335886    9066 logs.go:123] Gathering logs for etcd [bfbe626398fc] ...
	I0812 03:43:08.335894    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfbe626398fc"
	I0812 03:43:08.350108    9066 logs.go:123] Gathering logs for kube-proxy [a59a0c8eb222] ...
	I0812 03:43:08.350120    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59a0c8eb222"
	I0812 03:43:08.361362    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:43:08.361373    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:43:08.384893    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:43:08.384901    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:43:10.898710    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:43:15.901416    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:43:15.901811    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:43:15.948447    9066 logs.go:276] 1 containers: [905bc1caf712]
	I0812 03:43:15.948600    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:43:15.968557    9066 logs.go:276] 1 containers: [bfbe626398fc]
	I0812 03:43:15.968653    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:43:15.982615    9066 logs.go:276] 2 containers: [b92bd2d7e951 4c5e55542ab2]
	I0812 03:43:15.982693    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:43:15.994930    9066 logs.go:276] 1 containers: [82b3aa847fe7]
	I0812 03:43:15.995002    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:43:16.005547    9066 logs.go:276] 1 containers: [a59a0c8eb222]
	I0812 03:43:16.005619    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:43:16.015941    9066 logs.go:276] 1 containers: [13754d953934]
	I0812 03:43:16.016003    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:43:16.026590    9066 logs.go:276] 0 containers: []
	W0812 03:43:16.026602    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:43:16.026658    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:43:16.037513    9066 logs.go:276] 1 containers: [9fc97d13acff]
	I0812 03:43:16.037527    9066 logs.go:123] Gathering logs for kube-controller-manager [13754d953934] ...
	I0812 03:43:16.037532    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13754d953934"
	I0812 03:43:16.059111    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:43:16.059123    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:43:16.094732    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:43:16.094742    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:43:16.099052    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:43:16.099060    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:43:16.134275    9066 logs.go:123] Gathering logs for kube-apiserver [905bc1caf712] ...
	I0812 03:43:16.134289    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 905bc1caf712"
	I0812 03:43:16.149445    9066 logs.go:123] Gathering logs for coredns [4c5e55542ab2] ...
	I0812 03:43:16.149455    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5e55542ab2"
	I0812 03:43:16.161748    9066 logs.go:123] Gathering logs for kube-scheduler [82b3aa847fe7] ...
	I0812 03:43:16.161763    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82b3aa847fe7"
	I0812 03:43:16.181841    9066 logs.go:123] Gathering logs for kube-proxy [a59a0c8eb222] ...
	I0812 03:43:16.181855    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59a0c8eb222"
	I0812 03:43:16.197655    9066 logs.go:123] Gathering logs for storage-provisioner [9fc97d13acff] ...
	I0812 03:43:16.197668    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc97d13acff"
	I0812 03:43:16.210319    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:43:16.210327    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:43:16.233635    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:43:16.233642    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:43:16.245325    9066 logs.go:123] Gathering logs for etcd [bfbe626398fc] ...
	I0812 03:43:16.245339    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfbe626398fc"
	I0812 03:43:16.259853    9066 logs.go:123] Gathering logs for coredns [b92bd2d7e951] ...
	I0812 03:43:16.259864    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b92bd2d7e951"
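
Every "Gathering logs for ..." step above caps its source at the last 400 lines, whether the source is a systemd unit (journalctl -u ... -n 400) or a container (docker logs --tail 400 <id>). Below is a sketch of that capped collection, run locally through /bin/bash -c rather than over the SSH runner the log shows; the two commands and the kube-apiserver container ID 905bc1caf712 are copied from the log, while the gather helper is hypothetical and assumes a Linux host with docker and journalctl available:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one collection command the way the ssh_runner lines
    // above do: through a shell, returning stdout and stderr together.
    func gather(cmd string) (string, error) {
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        return string(out), err
    }

    func main() {
        for _, cmd := range []string{
            "sudo journalctl -u kubelet -n 400",   // unit logs, tail-capped
            "docker logs --tail 400 905bc1caf712", // kube-apiserver container
        } {
            text, err := gather(cmd)
            if err != nil {
                fmt.Println(cmd, "failed:", err)
                continue
            }
            fmt.Printf("%s: %d bytes collected\n", cmd, len(text))
        }
    }
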
	I0812 03:43:18.774123    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:43:23.776738    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:43:23.777098    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:43:23.817726    9066 logs.go:276] 1 containers: [905bc1caf712]
	I0812 03:43:23.817860    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:43:23.841088    9066 logs.go:276] 1 containers: [bfbe626398fc]
	I0812 03:43:23.841193    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:43:23.855771    9066 logs.go:276] 2 containers: [b92bd2d7e951 4c5e55542ab2]
	I0812 03:43:23.855848    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:43:23.868638    9066 logs.go:276] 1 containers: [82b3aa847fe7]
	I0812 03:43:23.868708    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:43:23.879293    9066 logs.go:276] 1 containers: [a59a0c8eb222]
	I0812 03:43:23.879359    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:43:23.889805    9066 logs.go:276] 1 containers: [13754d953934]
	I0812 03:43:23.889879    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:43:23.900566    9066 logs.go:276] 0 containers: []
	W0812 03:43:23.900575    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:43:23.900624    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:43:23.910923    9066 logs.go:276] 1 containers: [9fc97d13acff]
	I0812 03:43:23.910936    9066 logs.go:123] Gathering logs for etcd [bfbe626398fc] ...
	I0812 03:43:23.910941    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfbe626398fc"
	I0812 03:43:23.924641    9066 logs.go:123] Gathering logs for coredns [b92bd2d7e951] ...
	I0812 03:43:23.924654    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b92bd2d7e951"
	I0812 03:43:23.936262    9066 logs.go:123] Gathering logs for coredns [4c5e55542ab2] ...
	I0812 03:43:23.936275    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5e55542ab2"
	I0812 03:43:23.952919    9066 logs.go:123] Gathering logs for kube-proxy [a59a0c8eb222] ...
	I0812 03:43:23.952932    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59a0c8eb222"
	I0812 03:43:23.964884    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:43:23.964895    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:43:23.998702    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:43:23.998711    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:43:24.002965    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:43:24.002974    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:43:24.037392    9066 logs.go:123] Gathering logs for storage-provisioner [9fc97d13acff] ...
	I0812 03:43:24.037403    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc97d13acff"
	I0812 03:43:24.048654    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:43:24.048667    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:43:24.073257    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:43:24.073266    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:43:24.085067    9066 logs.go:123] Gathering logs for kube-apiserver [905bc1caf712] ...
	I0812 03:43:24.085080    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 905bc1caf712"
	I0812 03:43:24.102468    9066 logs.go:123] Gathering logs for kube-scheduler [82b3aa847fe7] ...
	I0812 03:43:24.102481    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82b3aa847fe7"
	I0812 03:43:24.117242    9066 logs.go:123] Gathering logs for kube-controller-manager [13754d953934] ...
	I0812 03:43:24.117253    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13754d953934"
	I0812 03:43:26.636264    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:43:31.636614    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:43:31.636860    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:43:31.662007    9066 logs.go:276] 1 containers: [905bc1caf712]
	I0812 03:43:31.662116    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:43:31.678502    9066 logs.go:276] 1 containers: [bfbe626398fc]
	I0812 03:43:31.678577    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:43:31.691105    9066 logs.go:276] 2 containers: [b92bd2d7e951 4c5e55542ab2]
	I0812 03:43:31.691168    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:43:31.701813    9066 logs.go:276] 1 containers: [82b3aa847fe7]
	I0812 03:43:31.701889    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:43:31.712572    9066 logs.go:276] 1 containers: [a59a0c8eb222]
	I0812 03:43:31.712650    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:43:31.725700    9066 logs.go:276] 1 containers: [13754d953934]
	I0812 03:43:31.725765    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:43:31.735892    9066 logs.go:276] 0 containers: []
	W0812 03:43:31.735903    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:43:31.735960    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:43:31.745725    9066 logs.go:276] 1 containers: [9fc97d13acff]
	I0812 03:43:31.745742    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:43:31.745746    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:43:31.770913    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:43:31.770920    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:43:31.782006    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:43:31.782020    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:43:31.816924    9066 logs.go:123] Gathering logs for kube-apiserver [905bc1caf712] ...
	I0812 03:43:31.816934    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 905bc1caf712"
	I0812 03:43:31.830777    9066 logs.go:123] Gathering logs for coredns [4c5e55542ab2] ...
	I0812 03:43:31.830788    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5e55542ab2"
	I0812 03:43:31.842017    9066 logs.go:123] Gathering logs for kube-proxy [a59a0c8eb222] ...
	I0812 03:43:31.842031    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59a0c8eb222"
	I0812 03:43:31.853932    9066 logs.go:123] Gathering logs for storage-provisioner [9fc97d13acff] ...
	I0812 03:43:31.853944    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc97d13acff"
	I0812 03:43:31.869136    9066 logs.go:123] Gathering logs for kube-controller-manager [13754d953934] ...
	I0812 03:43:31.869148    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13754d953934"
	I0812 03:43:31.890415    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:43:31.890429    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:43:31.895543    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:43:31.895556    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:43:31.932636    9066 logs.go:123] Gathering logs for etcd [bfbe626398fc] ...
	I0812 03:43:31.932646    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfbe626398fc"
	I0812 03:43:31.947177    9066 logs.go:123] Gathering logs for coredns [b92bd2d7e951] ...
	I0812 03:43:31.947189    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b92bd2d7e951"
	I0812 03:43:31.958692    9066 logs.go:123] Gathering logs for kube-scheduler [82b3aa847fe7] ...
	I0812 03:43:31.958703    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82b3aa847fe7"
	I0812 03:43:34.475878    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:43:39.478133    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:43:39.478314    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:43:39.493509    9066 logs.go:276] 1 containers: [905bc1caf712]
	I0812 03:43:39.493595    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:43:39.506331    9066 logs.go:276] 1 containers: [bfbe626398fc]
	I0812 03:43:39.506398    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:43:39.517556    9066 logs.go:276] 2 containers: [b92bd2d7e951 4c5e55542ab2]
	I0812 03:43:39.517629    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:43:39.528114    9066 logs.go:276] 1 containers: [82b3aa847fe7]
	I0812 03:43:39.528185    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:43:39.539077    9066 logs.go:276] 1 containers: [a59a0c8eb222]
	I0812 03:43:39.539140    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:43:39.549857    9066 logs.go:276] 1 containers: [13754d953934]
	I0812 03:43:39.549926    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:43:39.560138    9066 logs.go:276] 0 containers: []
	W0812 03:43:39.560150    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:43:39.560208    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:43:39.570789    9066 logs.go:276] 1 containers: [9fc97d13acff]
	I0812 03:43:39.570803    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:43:39.570808    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:43:39.575182    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:43:39.575191    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:43:39.609788    9066 logs.go:123] Gathering logs for etcd [bfbe626398fc] ...
	I0812 03:43:39.609802    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfbe626398fc"
	I0812 03:43:39.624638    9066 logs.go:123] Gathering logs for kube-scheduler [82b3aa847fe7] ...
	I0812 03:43:39.624648    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82b3aa847fe7"
	I0812 03:43:39.647864    9066 logs.go:123] Gathering logs for kube-proxy [a59a0c8eb222] ...
	I0812 03:43:39.647875    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59a0c8eb222"
	I0812 03:43:39.659487    9066 logs.go:123] Gathering logs for storage-provisioner [9fc97d13acff] ...
	I0812 03:43:39.659501    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc97d13acff"
	I0812 03:43:39.670516    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:43:39.670527    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:43:39.694819    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:43:39.694829    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:43:39.728155    9066 logs.go:123] Gathering logs for kube-apiserver [905bc1caf712] ...
	I0812 03:43:39.728162    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 905bc1caf712"
	I0812 03:43:39.742054    9066 logs.go:123] Gathering logs for coredns [b92bd2d7e951] ...
	I0812 03:43:39.742064    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b92bd2d7e951"
	I0812 03:43:39.753808    9066 logs.go:123] Gathering logs for coredns [4c5e55542ab2] ...
	I0812 03:43:39.753820    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5e55542ab2"
	I0812 03:43:39.765936    9066 logs.go:123] Gathering logs for kube-controller-manager [13754d953934] ...
	I0812 03:43:39.765946    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13754d953934"
	I0812 03:43:39.782947    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:43:39.782956    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
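
Each sweep opens with one docker ps per control-plane component, filtered on the k8s_ name prefix and reduced to bare container IDs; that is how the "N containers: [...]" counts above are produced, including the 0-container kindnet warning. A minimal sketch of that enumeration, assuming a local docker CLI (the --filter and --format flags are copied from the log; containerIDs is an illustrative helper, not minikube's logs.go):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists containers, running or exited, whose name matches
    // the k8s_<component> prefix, mirroring the docker ps runs above.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        // strings.Fields yields an empty slice for empty output,
        // matching the "0 containers: []" case for kindnet
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "storage-provisioner",
        } {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
        }
    }
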
	I0812 03:43:42.296624    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:43:47.299376    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:43:47.299825    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:43:47.341653    9066 logs.go:276] 1 containers: [905bc1caf712]
	I0812 03:43:47.341798    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:43:47.363815    9066 logs.go:276] 1 containers: [bfbe626398fc]
	I0812 03:43:47.363931    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:43:47.379419    9066 logs.go:276] 2 containers: [b92bd2d7e951 4c5e55542ab2]
	I0812 03:43:47.379487    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:43:47.392741    9066 logs.go:276] 1 containers: [82b3aa847fe7]
	I0812 03:43:47.392811    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:43:47.404273    9066 logs.go:276] 1 containers: [a59a0c8eb222]
	I0812 03:43:47.404339    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:43:47.417218    9066 logs.go:276] 1 containers: [13754d953934]
	I0812 03:43:47.417288    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:43:47.428894    9066 logs.go:276] 0 containers: []
	W0812 03:43:47.428907    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:43:47.428963    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:43:47.439855    9066 logs.go:276] 1 containers: [9fc97d13acff]
	I0812 03:43:47.439869    9066 logs.go:123] Gathering logs for kube-apiserver [905bc1caf712] ...
	I0812 03:43:47.439875    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 905bc1caf712"
	I0812 03:43:47.454670    9066 logs.go:123] Gathering logs for kube-controller-manager [13754d953934] ...
	I0812 03:43:47.454680    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13754d953934"
	I0812 03:43:47.473364    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:43:47.473376    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:43:47.498701    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:43:47.498711    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:43:47.510509    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:43:47.510523    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:43:47.545042    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:43:47.545063    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:43:47.552133    9066 logs.go:123] Gathering logs for coredns [b92bd2d7e951] ...
	I0812 03:43:47.552145    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b92bd2d7e951"
	I0812 03:43:47.592828    9066 logs.go:123] Gathering logs for coredns [4c5e55542ab2] ...
	I0812 03:43:47.592842    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5e55542ab2"
	I0812 03:43:47.619175    9066 logs.go:123] Gathering logs for kube-scheduler [82b3aa847fe7] ...
	I0812 03:43:47.619192    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82b3aa847fe7"
	I0812 03:43:47.652032    9066 logs.go:123] Gathering logs for kube-proxy [a59a0c8eb222] ...
	I0812 03:43:47.652055    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59a0c8eb222"
	I0812 03:43:47.671504    9066 logs.go:123] Gathering logs for storage-provisioner [9fc97d13acff] ...
	I0812 03:43:47.671517    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc97d13acff"
	I0812 03:43:47.692488    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:43:47.692499    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:43:47.728335    9066 logs.go:123] Gathering logs for etcd [bfbe626398fc] ...
	I0812 03:43:47.728351    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfbe626398fc"
	I0812 03:43:50.252564    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:43:55.254752    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:43:55.255155    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:43:55.305786    9066 logs.go:276] 1 containers: [905bc1caf712]
	I0812 03:43:55.305895    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:43:55.326491    9066 logs.go:276] 1 containers: [bfbe626398fc]
	I0812 03:43:55.326547    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:43:55.340078    9066 logs.go:276] 4 containers: [600b86fd0c4b 05997bc3d8e7 b92bd2d7e951 4c5e55542ab2]
	I0812 03:43:55.340146    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:43:55.352138    9066 logs.go:276] 1 containers: [82b3aa847fe7]
	I0812 03:43:55.352196    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:43:55.363031    9066 logs.go:276] 1 containers: [a59a0c8eb222]
	I0812 03:43:55.363102    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:43:55.373970    9066 logs.go:276] 1 containers: [13754d953934]
	I0812 03:43:55.374029    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:43:55.385100    9066 logs.go:276] 0 containers: []
	W0812 03:43:55.385112    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:43:55.385160    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:43:55.395866    9066 logs.go:276] 1 containers: [9fc97d13acff]
	I0812 03:43:55.395886    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:43:55.395891    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:43:55.409144    9066 logs.go:123] Gathering logs for kube-scheduler [82b3aa847fe7] ...
	I0812 03:43:55.409161    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82b3aa847fe7"
	I0812 03:43:55.424502    9066 logs.go:123] Gathering logs for kube-controller-manager [13754d953934] ...
	I0812 03:43:55.424512    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13754d953934"
	I0812 03:43:55.443696    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:43:55.443707    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:43:55.476566    9066 logs.go:123] Gathering logs for coredns [05997bc3d8e7] ...
	I0812 03:43:55.476576    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05997bc3d8e7"
	I0812 03:43:55.490525    9066 logs.go:123] Gathering logs for coredns [600b86fd0c4b] ...
	I0812 03:43:55.490540    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 600b86fd0c4b"
	I0812 03:43:55.506212    9066 logs.go:123] Gathering logs for kube-proxy [a59a0c8eb222] ...
	I0812 03:43:55.506221    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59a0c8eb222"
	I0812 03:43:55.519810    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:43:55.519819    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:43:55.544647    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:43:55.544654    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:43:55.580683    9066 logs.go:123] Gathering logs for kube-apiserver [905bc1caf712] ...
	I0812 03:43:55.580695    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 905bc1caf712"
	I0812 03:43:55.599905    9066 logs.go:123] Gathering logs for coredns [b92bd2d7e951] ...
	I0812 03:43:55.599915    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b92bd2d7e951"
	I0812 03:43:55.615927    9066 logs.go:123] Gathering logs for coredns [4c5e55542ab2] ...
	I0812 03:43:55.615936    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5e55542ab2"
	I0812 03:43:55.627792    9066 logs.go:123] Gathering logs for storage-provisioner [9fc97d13acff] ...
	I0812 03:43:55.627805    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc97d13acff"
	I0812 03:43:55.642453    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:43:55.642463    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:43:55.646764    9066 logs.go:123] Gathering logs for etcd [bfbe626398fc] ...
	I0812 03:43:55.646770    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfbe626398fc"
	I0812 03:43:58.161876    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:44:03.164544    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:44:03.164597    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:44:03.176906    9066 logs.go:276] 1 containers: [905bc1caf712]
	I0812 03:44:03.176975    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:44:03.190300    9066 logs.go:276] 1 containers: [bfbe626398fc]
	I0812 03:44:03.190365    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:44:03.203408    9066 logs.go:276] 4 containers: [600b86fd0c4b 05997bc3d8e7 b92bd2d7e951 4c5e55542ab2]
	I0812 03:44:03.203468    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:44:03.214923    9066 logs.go:276] 1 containers: [82b3aa847fe7]
	I0812 03:44:03.214990    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:44:03.226846    9066 logs.go:276] 1 containers: [a59a0c8eb222]
	I0812 03:44:03.226901    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:44:03.242629    9066 logs.go:276] 1 containers: [13754d953934]
	I0812 03:44:03.242717    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:44:03.255204    9066 logs.go:276] 0 containers: []
	W0812 03:44:03.255215    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:44:03.255266    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:44:03.267425    9066 logs.go:276] 1 containers: [9fc97d13acff]
	I0812 03:44:03.267443    9066 logs.go:123] Gathering logs for coredns [05997bc3d8e7] ...
	I0812 03:44:03.267449    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05997bc3d8e7"
	I0812 03:44:03.280805    9066 logs.go:123] Gathering logs for coredns [4c5e55542ab2] ...
	I0812 03:44:03.280816    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5e55542ab2"
	I0812 03:44:03.299150    9066 logs.go:123] Gathering logs for coredns [600b86fd0c4b] ...
	I0812 03:44:03.299161    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 600b86fd0c4b"
	I0812 03:44:03.312527    9066 logs.go:123] Gathering logs for kube-proxy [a59a0c8eb222] ...
	I0812 03:44:03.312538    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59a0c8eb222"
	I0812 03:44:03.326993    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:44:03.327005    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:44:03.340796    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:44:03.340808    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:44:03.346078    9066 logs.go:123] Gathering logs for etcd [bfbe626398fc] ...
	I0812 03:44:03.346091    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfbe626398fc"
	I0812 03:44:03.361600    9066 logs.go:123] Gathering logs for coredns [b92bd2d7e951] ...
	I0812 03:44:03.361612    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b92bd2d7e951"
	I0812 03:44:03.375108    9066 logs.go:123] Gathering logs for kube-scheduler [82b3aa847fe7] ...
	I0812 03:44:03.375120    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82b3aa847fe7"
	I0812 03:44:03.392198    9066 logs.go:123] Gathering logs for kube-controller-manager [13754d953934] ...
	I0812 03:44:03.392212    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13754d953934"
	I0812 03:44:03.414850    9066 logs.go:123] Gathering logs for storage-provisioner [9fc97d13acff] ...
	I0812 03:44:03.414862    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc97d13acff"
	I0812 03:44:03.429760    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:44:03.429772    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:44:03.455622    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:44:03.455637    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:44:03.496438    9066 logs.go:123] Gathering logs for kube-apiserver [905bc1caf712] ...
	I0812 03:44:03.496450    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 905bc1caf712"
	I0812 03:44:03.512487    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:44:03.512507    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:44:06.051740    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:44:11.054593    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:44:11.055058    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:44:11.094231    9066 logs.go:276] 1 containers: [905bc1caf712]
	I0812 03:44:11.094355    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:44:11.116199    9066 logs.go:276] 1 containers: [bfbe626398fc]
	I0812 03:44:11.116313    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:44:11.131397    9066 logs.go:276] 4 containers: [600b86fd0c4b 05997bc3d8e7 b92bd2d7e951 4c5e55542ab2]
	I0812 03:44:11.131471    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:44:11.147312    9066 logs.go:276] 1 containers: [82b3aa847fe7]
	I0812 03:44:11.147379    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:44:11.158575    9066 logs.go:276] 1 containers: [a59a0c8eb222]
	I0812 03:44:11.158641    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:44:11.169386    9066 logs.go:276] 1 containers: [13754d953934]
	I0812 03:44:11.169448    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:44:11.179702    9066 logs.go:276] 0 containers: []
	W0812 03:44:11.179713    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:44:11.179770    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:44:11.189929    9066 logs.go:276] 1 containers: [9fc97d13acff]
	I0812 03:44:11.189946    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:44:11.189951    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:44:11.194292    9066 logs.go:123] Gathering logs for coredns [b92bd2d7e951] ...
	I0812 03:44:11.194298    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b92bd2d7e951"
	I0812 03:44:11.206175    9066 logs.go:123] Gathering logs for storage-provisioner [9fc97d13acff] ...
	I0812 03:44:11.206189    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc97d13acff"
	I0812 03:44:11.217938    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:44:11.217951    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:44:11.241408    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:44:11.241417    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:44:11.254268    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:44:11.254281    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:44:11.287178    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:44:11.287184    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:44:11.320937    9066 logs.go:123] Gathering logs for etcd [bfbe626398fc] ...
	I0812 03:44:11.320948    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfbe626398fc"
	I0812 03:44:11.338426    9066 logs.go:123] Gathering logs for coredns [05997bc3d8e7] ...
	I0812 03:44:11.338436    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05997bc3d8e7"
	I0812 03:44:11.349921    9066 logs.go:123] Gathering logs for coredns [600b86fd0c4b] ...
	I0812 03:44:11.349931    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 600b86fd0c4b"
	I0812 03:44:11.363828    9066 logs.go:123] Gathering logs for kube-scheduler [82b3aa847fe7] ...
	I0812 03:44:11.363838    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82b3aa847fe7"
	I0812 03:44:11.386791    9066 logs.go:123] Gathering logs for kube-controller-manager [13754d953934] ...
	I0812 03:44:11.386801    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13754d953934"
	I0812 03:44:11.404746    9066 logs.go:123] Gathering logs for kube-apiserver [905bc1caf712] ...
	I0812 03:44:11.404759    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 905bc1caf712"
	I0812 03:44:11.419564    9066 logs.go:123] Gathering logs for coredns [4c5e55542ab2] ...
	I0812 03:44:11.419576    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5e55542ab2"
	I0812 03:44:11.431192    9066 logs.go:123] Gathering logs for kube-proxy [a59a0c8eb222] ...
	I0812 03:44:11.431204    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59a0c8eb222"
	I0812 03:44:13.945073    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:44:18.947658    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:44:18.948111    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:44:18.987083    9066 logs.go:276] 1 containers: [905bc1caf712]
	I0812 03:44:18.987215    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:44:19.012101    9066 logs.go:276] 1 containers: [bfbe626398fc]
	I0812 03:44:19.012187    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:44:19.027036    9066 logs.go:276] 4 containers: [600b86fd0c4b 05997bc3d8e7 b92bd2d7e951 4c5e55542ab2]
	I0812 03:44:19.027109    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:44:19.044228    9066 logs.go:276] 1 containers: [82b3aa847fe7]
	I0812 03:44:19.044296    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:44:19.054691    9066 logs.go:276] 1 containers: [a59a0c8eb222]
	I0812 03:44:19.054756    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:44:19.065782    9066 logs.go:276] 1 containers: [13754d953934]
	I0812 03:44:19.065851    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:44:19.079777    9066 logs.go:276] 0 containers: []
	W0812 03:44:19.079789    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:44:19.079846    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:44:19.091139    9066 logs.go:276] 1 containers: [9fc97d13acff]
	I0812 03:44:19.091161    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:44:19.091168    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:44:19.126428    9066 logs.go:123] Gathering logs for kube-controller-manager [13754d953934] ...
	I0812 03:44:19.126440    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13754d953934"
	I0812 03:44:19.144024    9066 logs.go:123] Gathering logs for kube-scheduler [82b3aa847fe7] ...
	I0812 03:44:19.144035    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82b3aa847fe7"
	I0812 03:44:19.158980    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:44:19.158993    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:44:19.163099    9066 logs.go:123] Gathering logs for coredns [600b86fd0c4b] ...
	I0812 03:44:19.163106    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 600b86fd0c4b"
	I0812 03:44:19.174688    9066 logs.go:123] Gathering logs for coredns [b92bd2d7e951] ...
	I0812 03:44:19.174700    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b92bd2d7e951"
	I0812 03:44:19.186914    9066 logs.go:123] Gathering logs for coredns [4c5e55542ab2] ...
	I0812 03:44:19.186926    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5e55542ab2"
	I0812 03:44:19.198760    9066 logs.go:123] Gathering logs for kube-apiserver [905bc1caf712] ...
	I0812 03:44:19.198773    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 905bc1caf712"
	I0812 03:44:19.215965    9066 logs.go:123] Gathering logs for storage-provisioner [9fc97d13acff] ...
	I0812 03:44:19.215977    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc97d13acff"
	I0812 03:44:19.227402    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:44:19.227411    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:44:19.252112    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:44:19.252125    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:44:19.264076    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:44:19.264089    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:44:19.299485    9066 logs.go:123] Gathering logs for etcd [bfbe626398fc] ...
	I0812 03:44:19.299495    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfbe626398fc"
	I0812 03:44:19.314245    9066 logs.go:123] Gathering logs for coredns [05997bc3d8e7] ...
	I0812 03:44:19.314258    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05997bc3d8e7"
	I0812 03:44:19.325508    9066 logs.go:123] Gathering logs for kube-proxy [a59a0c8eb222] ...
	I0812 03:44:19.325521    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59a0c8eb222"
	I0812 03:44:21.842341    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:44:26.844770    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:44:26.844975    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:44:26.866955    9066 logs.go:276] 1 containers: [905bc1caf712]
	I0812 03:44:26.867073    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:44:26.882949    9066 logs.go:276] 1 containers: [bfbe626398fc]
	I0812 03:44:26.883026    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:44:26.895547    9066 logs.go:276] 4 containers: [600b86fd0c4b 05997bc3d8e7 b92bd2d7e951 4c5e55542ab2]
	I0812 03:44:26.895621    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:44:26.907306    9066 logs.go:276] 1 containers: [82b3aa847fe7]
	I0812 03:44:26.907377    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:44:26.918324    9066 logs.go:276] 1 containers: [a59a0c8eb222]
	I0812 03:44:26.918408    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:44:26.929660    9066 logs.go:276] 1 containers: [13754d953934]
	I0812 03:44:26.929719    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:44:26.941280    9066 logs.go:276] 0 containers: []
	W0812 03:44:26.941293    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:44:26.941356    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:44:26.955027    9066 logs.go:276] 1 containers: [9fc97d13acff]
	I0812 03:44:26.955053    9066 logs.go:123] Gathering logs for kube-scheduler [82b3aa847fe7] ...
	I0812 03:44:26.955058    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82b3aa847fe7"
	I0812 03:44:26.971051    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:44:26.971063    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:44:26.984364    9066 logs.go:123] Gathering logs for kube-proxy [a59a0c8eb222] ...
	I0812 03:44:26.984376    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59a0c8eb222"
	I0812 03:44:26.997640    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:44:26.997656    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:44:27.034761    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:44:27.034784    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:44:27.072415    9066 logs.go:123] Gathering logs for coredns [4c5e55542ab2] ...
	I0812 03:44:27.072429    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5e55542ab2"
	I0812 03:44:27.086072    9066 logs.go:123] Gathering logs for coredns [05997bc3d8e7] ...
	I0812 03:44:27.086087    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05997bc3d8e7"
	I0812 03:44:27.098889    9066 logs.go:123] Gathering logs for coredns [600b86fd0c4b] ...
	I0812 03:44:27.098900    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 600b86fd0c4b"
	I0812 03:44:27.111629    9066 logs.go:123] Gathering logs for coredns [b92bd2d7e951] ...
	I0812 03:44:27.111644    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b92bd2d7e951"
	I0812 03:44:27.123930    9066 logs.go:123] Gathering logs for kube-controller-manager [13754d953934] ...
	I0812 03:44:27.123942    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13754d953934"
	I0812 03:44:27.154287    9066 logs.go:123] Gathering logs for storage-provisioner [9fc97d13acff] ...
	I0812 03:44:27.154306    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc97d13acff"
	I0812 03:44:27.167347    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:44:27.167360    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:44:27.192396    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:44:27.192421    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:44:27.197544    9066 logs.go:123] Gathering logs for kube-apiserver [905bc1caf712] ...
	I0812 03:44:27.197553    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 905bc1caf712"
	I0812 03:44:27.212850    9066 logs.go:123] Gathering logs for etcd [bfbe626398fc] ...
	I0812 03:44:27.212861    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfbe626398fc"
	I0812 03:44:29.730202    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:44:34.732232    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:44:34.732460    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:44:34.744601    9066 logs.go:276] 1 containers: [905bc1caf712]
	I0812 03:44:34.744675    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:44:34.755503    9066 logs.go:276] 1 containers: [bfbe626398fc]
	I0812 03:44:34.755566    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:44:34.765873    9066 logs.go:276] 4 containers: [600b86fd0c4b 05997bc3d8e7 b92bd2d7e951 4c5e55542ab2]
	I0812 03:44:34.765936    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:44:34.776631    9066 logs.go:276] 1 containers: [82b3aa847fe7]
	I0812 03:44:34.776693    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:44:34.791091    9066 logs.go:276] 1 containers: [a59a0c8eb222]
	I0812 03:44:34.791152    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:44:34.803649    9066 logs.go:276] 1 containers: [13754d953934]
	I0812 03:44:34.803713    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:44:34.814127    9066 logs.go:276] 0 containers: []
	W0812 03:44:34.814138    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:44:34.814193    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:44:34.824724    9066 logs.go:276] 1 containers: [9fc97d13acff]
	I0812 03:44:34.824741    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:44:34.824746    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:44:34.860712    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:44:34.860719    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:44:34.896306    9066 logs.go:123] Gathering logs for coredns [600b86fd0c4b] ...
	I0812 03:44:34.896318    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 600b86fd0c4b"
	I0812 03:44:34.912562    9066 logs.go:123] Gathering logs for coredns [4c5e55542ab2] ...
	I0812 03:44:34.912574    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5e55542ab2"
	I0812 03:44:34.928679    9066 logs.go:123] Gathering logs for coredns [b92bd2d7e951] ...
	I0812 03:44:34.928689    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b92bd2d7e951"
	I0812 03:44:34.941070    9066 logs.go:123] Gathering logs for kube-scheduler [82b3aa847fe7] ...
	I0812 03:44:34.941084    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82b3aa847fe7"
	I0812 03:44:34.956448    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:44:34.956460    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:44:34.967995    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:44:34.968006    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:44:34.972350    9066 logs.go:123] Gathering logs for kube-proxy [a59a0c8eb222] ...
	I0812 03:44:34.972359    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59a0c8eb222"
	I0812 03:44:34.986958    9066 logs.go:123] Gathering logs for kube-controller-manager [13754d953934] ...
	I0812 03:44:34.986969    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13754d953934"
	I0812 03:44:35.005176    9066 logs.go:123] Gathering logs for storage-provisioner [9fc97d13acff] ...
	I0812 03:44:35.005189    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc97d13acff"
	I0812 03:44:35.016479    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:44:35.016491    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:44:35.042930    9066 logs.go:123] Gathering logs for kube-apiserver [905bc1caf712] ...
	I0812 03:44:35.042939    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 905bc1caf712"
	I0812 03:44:35.056977    9066 logs.go:123] Gathering logs for etcd [bfbe626398fc] ...
	I0812 03:44:35.056990    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfbe626398fc"
	I0812 03:44:35.073213    9066 logs.go:123] Gathering logs for coredns [05997bc3d8e7] ...
	I0812 03:44:35.073226    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05997bc3d8e7"
	I0812 03:44:37.587309    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:44:42.588475    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:44:42.588920    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:44:42.627449    9066 logs.go:276] 1 containers: [905bc1caf712]
	I0812 03:44:42.627566    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:44:42.648279    9066 logs.go:276] 1 containers: [bfbe626398fc]
	I0812 03:44:42.648375    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:44:42.663842    9066 logs.go:276] 4 containers: [600b86fd0c4b 05997bc3d8e7 b92bd2d7e951 4c5e55542ab2]
	I0812 03:44:42.663923    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:44:42.676039    9066 logs.go:276] 1 containers: [82b3aa847fe7]
	I0812 03:44:42.676101    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:44:42.694319    9066 logs.go:276] 1 containers: [a59a0c8eb222]
	I0812 03:44:42.694383    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:44:42.705272    9066 logs.go:276] 1 containers: [13754d953934]
	I0812 03:44:42.705341    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:44:42.719501    9066 logs.go:276] 0 containers: []
	W0812 03:44:42.719511    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:44:42.719566    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:44:42.730592    9066 logs.go:276] 1 containers: [9fc97d13acff]
	I0812 03:44:42.730612    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:44:42.730621    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:44:42.765312    9066 logs.go:123] Gathering logs for coredns [4c5e55542ab2] ...
	I0812 03:44:42.765323    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5e55542ab2"
	I0812 03:44:42.777371    9066 logs.go:123] Gathering logs for kube-controller-manager [13754d953934] ...
	I0812 03:44:42.777385    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13754d953934"
	I0812 03:44:42.795291    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:44:42.795301    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:44:42.828737    9066 logs.go:123] Gathering logs for kube-apiserver [905bc1caf712] ...
	I0812 03:44:42.828743    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 905bc1caf712"
	I0812 03:44:42.843106    9066 logs.go:123] Gathering logs for etcd [bfbe626398fc] ...
	I0812 03:44:42.843116    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfbe626398fc"
	I0812 03:44:42.861828    9066 logs.go:123] Gathering logs for coredns [05997bc3d8e7] ...
	I0812 03:44:42.861839    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05997bc3d8e7"
	I0812 03:44:42.874039    9066 logs.go:123] Gathering logs for kube-proxy [a59a0c8eb222] ...
	I0812 03:44:42.874052    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59a0c8eb222"
	I0812 03:44:42.885762    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:44:42.885775    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:44:42.897052    9066 logs.go:123] Gathering logs for storage-provisioner [9fc97d13acff] ...
	I0812 03:44:42.897065    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc97d13acff"
	I0812 03:44:42.909034    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:44:42.909045    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:44:42.933300    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:44:42.933307    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:44:42.937484    9066 logs.go:123] Gathering logs for coredns [600b86fd0c4b] ...
	I0812 03:44:42.937493    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 600b86fd0c4b"
	I0812 03:44:42.956084    9066 logs.go:123] Gathering logs for coredns [b92bd2d7e951] ...
	I0812 03:44:42.956094    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b92bd2d7e951"
	I0812 03:44:42.968084    9066 logs.go:123] Gathering logs for kube-scheduler [82b3aa847fe7] ...
	I0812 03:44:42.968097    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82b3aa847fe7"
	I0812 03:44:45.485406    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:44:50.488136    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:44:50.488418    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:44:50.522158    9066 logs.go:276] 1 containers: [905bc1caf712]
	I0812 03:44:50.522281    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:44:50.538235    9066 logs.go:276] 1 containers: [bfbe626398fc]
	I0812 03:44:50.538307    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:44:50.550683    9066 logs.go:276] 4 containers: [600b86fd0c4b 05997bc3d8e7 b92bd2d7e951 4c5e55542ab2]
	I0812 03:44:50.550761    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:44:50.561621    9066 logs.go:276] 1 containers: [82b3aa847fe7]
	I0812 03:44:50.561687    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:44:50.572118    9066 logs.go:276] 1 containers: [a59a0c8eb222]
	I0812 03:44:50.572179    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:44:50.583092    9066 logs.go:276] 1 containers: [13754d953934]
	I0812 03:44:50.583162    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:44:50.593838    9066 logs.go:276] 0 containers: []
	W0812 03:44:50.593847    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:44:50.593893    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:44:50.604909    9066 logs.go:276] 1 containers: [9fc97d13acff]
	I0812 03:44:50.604930    9066 logs.go:123] Gathering logs for kube-apiserver [905bc1caf712] ...
	I0812 03:44:50.604935    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 905bc1caf712"
	I0812 03:44:50.619320    9066 logs.go:123] Gathering logs for coredns [05997bc3d8e7] ...
	I0812 03:44:50.619331    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05997bc3d8e7"
	I0812 03:44:50.634199    9066 logs.go:123] Gathering logs for kube-scheduler [82b3aa847fe7] ...
	I0812 03:44:50.634236    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82b3aa847fe7"
	I0812 03:44:50.649465    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:44:50.649476    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:44:50.682419    9066 logs.go:123] Gathering logs for etcd [bfbe626398fc] ...
	I0812 03:44:50.682426    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfbe626398fc"
	I0812 03:44:50.700607    9066 logs.go:123] Gathering logs for coredns [600b86fd0c4b] ...
	I0812 03:44:50.700620    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 600b86fd0c4b"
	I0812 03:44:50.712011    9066 logs.go:123] Gathering logs for coredns [4c5e55542ab2] ...
	I0812 03:44:50.712022    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5e55542ab2"
	I0812 03:44:50.724385    9066 logs.go:123] Gathering logs for kube-proxy [a59a0c8eb222] ...
	I0812 03:44:50.724399    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59a0c8eb222"
	I0812 03:44:50.743288    9066 logs.go:123] Gathering logs for kube-controller-manager [13754d953934] ...
	I0812 03:44:50.743302    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13754d953934"
	I0812 03:44:50.760839    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:44:50.760849    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:44:50.765304    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:44:50.765309    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:44:50.800127    9066 logs.go:123] Gathering logs for coredns [b92bd2d7e951] ...
	I0812 03:44:50.800136    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b92bd2d7e951"
	I0812 03:44:50.811800    9066 logs.go:123] Gathering logs for storage-provisioner [9fc97d13acff] ...
	I0812 03:44:50.811813    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc97d13acff"
	I0812 03:44:50.823410    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:44:50.823424    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:44:50.847586    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:44:50.847593    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:44:53.360993    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:44:58.363158    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:44:58.363522    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:44:58.395132    9066 logs.go:276] 1 containers: [905bc1caf712]
	I0812 03:44:58.395247    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:44:58.417119    9066 logs.go:276] 1 containers: [bfbe626398fc]
	I0812 03:44:58.417205    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:44:58.430802    9066 logs.go:276] 4 containers: [600b86fd0c4b 05997bc3d8e7 b92bd2d7e951 4c5e55542ab2]
	I0812 03:44:58.430897    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:44:58.442020    9066 logs.go:276] 1 containers: [82b3aa847fe7]
	I0812 03:44:58.442090    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:44:58.452259    9066 logs.go:276] 1 containers: [a59a0c8eb222]
	I0812 03:44:58.452321    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:44:58.462933    9066 logs.go:276] 1 containers: [13754d953934]
	I0812 03:44:58.462988    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:44:58.473247    9066 logs.go:276] 0 containers: []
	W0812 03:44:58.473262    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:44:58.473313    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:44:58.484310    9066 logs.go:276] 1 containers: [9fc97d13acff]
	I0812 03:44:58.484327    9066 logs.go:123] Gathering logs for coredns [b92bd2d7e951] ...
	I0812 03:44:58.484335    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b92bd2d7e951"
	I0812 03:44:58.496277    9066 logs.go:123] Gathering logs for kube-scheduler [82b3aa847fe7] ...
	I0812 03:44:58.496288    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82b3aa847fe7"
	I0812 03:44:58.511303    9066 logs.go:123] Gathering logs for storage-provisioner [9fc97d13acff] ...
	I0812 03:44:58.511315    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc97d13acff"
	I0812 03:44:58.522713    9066 logs.go:123] Gathering logs for kube-proxy [a59a0c8eb222] ...
	I0812 03:44:58.522724    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59a0c8eb222"
	I0812 03:44:58.534621    9066 logs.go:123] Gathering logs for kube-controller-manager [13754d953934] ...
	I0812 03:44:58.534635    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13754d953934"
	I0812 03:44:58.551351    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:44:58.551363    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:44:58.563185    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:44:58.563197    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:44:58.567890    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:44:58.567899    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:44:58.602996    9066 logs.go:123] Gathering logs for etcd [bfbe626398fc] ...
	I0812 03:44:58.603007    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfbe626398fc"
	I0812 03:44:58.616893    9066 logs.go:123] Gathering logs for coredns [05997bc3d8e7] ...
	I0812 03:44:58.616905    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05997bc3d8e7"
	I0812 03:44:58.633207    9066 logs.go:123] Gathering logs for coredns [4c5e55542ab2] ...
	I0812 03:44:58.633220    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5e55542ab2"
	I0812 03:44:58.645496    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:44:58.645508    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:44:58.670369    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:44:58.670380    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:44:58.705417    9066 logs.go:123] Gathering logs for kube-apiserver [905bc1caf712] ...
	I0812 03:44:58.705429    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 905bc1caf712"
	I0812 03:44:58.719062    9066 logs.go:123] Gathering logs for coredns [600b86fd0c4b] ...
	I0812 03:44:58.719076    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 600b86fd0c4b"
	I0812 03:45:01.232767    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:45:06.235083    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:45:06.235242    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:45:06.250758    9066 logs.go:276] 1 containers: [905bc1caf712]
	I0812 03:45:06.250837    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:45:06.262345    9066 logs.go:276] 1 containers: [bfbe626398fc]
	I0812 03:45:06.262413    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:45:06.273497    9066 logs.go:276] 4 containers: [600b86fd0c4b 05997bc3d8e7 b92bd2d7e951 4c5e55542ab2]
	I0812 03:45:06.273567    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:45:06.283529    9066 logs.go:276] 1 containers: [82b3aa847fe7]
	I0812 03:45:06.283589    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:45:06.294091    9066 logs.go:276] 1 containers: [a59a0c8eb222]
	I0812 03:45:06.294155    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:45:06.307270    9066 logs.go:276] 1 containers: [13754d953934]
	I0812 03:45:06.307338    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:45:06.317287    9066 logs.go:276] 0 containers: []
	W0812 03:45:06.317298    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:45:06.317351    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:45:06.328980    9066 logs.go:276] 1 containers: [9fc97d13acff]
	I0812 03:45:06.328999    9066 logs.go:123] Gathering logs for coredns [05997bc3d8e7] ...
	I0812 03:45:06.329005    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05997bc3d8e7"
	I0812 03:45:06.340734    9066 logs.go:123] Gathering logs for coredns [b92bd2d7e951] ...
	I0812 03:45:06.340744    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b92bd2d7e951"
	I0812 03:45:06.352401    9066 logs.go:123] Gathering logs for kube-controller-manager [13754d953934] ...
	I0812 03:45:06.352412    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13754d953934"
	I0812 03:45:06.369659    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:45:06.369669    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:45:06.381321    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:45:06.381337    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:45:06.385442    9066 logs.go:123] Gathering logs for etcd [bfbe626398fc] ...
	I0812 03:45:06.385451    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfbe626398fc"
	I0812 03:45:06.398851    9066 logs.go:123] Gathering logs for coredns [600b86fd0c4b] ...
	I0812 03:45:06.398860    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 600b86fd0c4b"
	I0812 03:45:06.410701    9066 logs.go:123] Gathering logs for coredns [4c5e55542ab2] ...
	I0812 03:45:06.410715    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5e55542ab2"
	I0812 03:45:06.422797    9066 logs.go:123] Gathering logs for kube-scheduler [82b3aa847fe7] ...
	I0812 03:45:06.422811    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82b3aa847fe7"
	I0812 03:45:06.437628    9066 logs.go:123] Gathering logs for kube-proxy [a59a0c8eb222] ...
	I0812 03:45:06.437639    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59a0c8eb222"
	I0812 03:45:06.449139    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:45:06.449153    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:45:06.483612    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:45:06.483621    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:45:06.518163    9066 logs.go:123] Gathering logs for kube-apiserver [905bc1caf712] ...
	I0812 03:45:06.518177    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 905bc1caf712"
	I0812 03:45:06.532221    9066 logs.go:123] Gathering logs for storage-provisioner [9fc97d13acff] ...
	I0812 03:45:06.532234    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc97d13acff"
	I0812 03:45:06.544271    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:45:06.544286    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:45:09.070978    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:45:14.073624    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:45:14.074096    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:45:14.118377    9066 logs.go:276] 1 containers: [905bc1caf712]
	I0812 03:45:14.118489    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:45:14.159479    9066 logs.go:276] 1 containers: [bfbe626398fc]
	I0812 03:45:14.159551    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:45:14.173758    9066 logs.go:276] 4 containers: [600b86fd0c4b 05997bc3d8e7 b92bd2d7e951 4c5e55542ab2]
	I0812 03:45:14.173836    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:45:14.184904    9066 logs.go:276] 1 containers: [82b3aa847fe7]
	I0812 03:45:14.184977    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:45:14.196831    9066 logs.go:276] 1 containers: [a59a0c8eb222]
	I0812 03:45:14.196897    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:45:14.207936    9066 logs.go:276] 1 containers: [13754d953934]
	I0812 03:45:14.207995    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:45:14.218207    9066 logs.go:276] 0 containers: []
	W0812 03:45:14.218221    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:45:14.218277    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:45:14.229043    9066 logs.go:276] 1 containers: [9fc97d13acff]
	I0812 03:45:14.229059    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:45:14.229065    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:45:14.233251    9066 logs.go:123] Gathering logs for kube-scheduler [82b3aa847fe7] ...
	I0812 03:45:14.233260    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82b3aa847fe7"
	I0812 03:45:14.249916    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:45:14.249932    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:45:14.263538    9066 logs.go:123] Gathering logs for coredns [b92bd2d7e951] ...
	I0812 03:45:14.263555    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b92bd2d7e951"
	I0812 03:45:14.275580    9066 logs.go:123] Gathering logs for coredns [4c5e55542ab2] ...
	I0812 03:45:14.275598    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5e55542ab2"
	I0812 03:45:14.287398    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:45:14.287410    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:45:14.321087    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:45:14.321102    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:45:14.355911    9066 logs.go:123] Gathering logs for coredns [600b86fd0c4b] ...
	I0812 03:45:14.355924    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 600b86fd0c4b"
	I0812 03:45:14.367605    9066 logs.go:123] Gathering logs for coredns [05997bc3d8e7] ...
	I0812 03:45:14.367617    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05997bc3d8e7"
	I0812 03:45:14.378560    9066 logs.go:123] Gathering logs for kube-apiserver [905bc1caf712] ...
	I0812 03:45:14.378571    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 905bc1caf712"
	I0812 03:45:14.392573    9066 logs.go:123] Gathering logs for kube-controller-manager [13754d953934] ...
	I0812 03:45:14.392586    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13754d953934"
	I0812 03:45:14.412826    9066 logs.go:123] Gathering logs for storage-provisioner [9fc97d13acff] ...
	I0812 03:45:14.412835    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc97d13acff"
	I0812 03:45:14.424508    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:45:14.424522    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:45:14.449070    9066 logs.go:123] Gathering logs for etcd [bfbe626398fc] ...
	I0812 03:45:14.449077    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfbe626398fc"
	I0812 03:45:14.463009    9066 logs.go:123] Gathering logs for kube-proxy [a59a0c8eb222] ...
	I0812 03:45:14.463020    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59a0c8eb222"
	I0812 03:45:16.976917    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:45:21.979127    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:45:21.979565    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:45:22.019382    9066 logs.go:276] 1 containers: [905bc1caf712]
	I0812 03:45:22.019509    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:45:22.041157    9066 logs.go:276] 1 containers: [bfbe626398fc]
	I0812 03:45:22.041271    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:45:22.060870    9066 logs.go:276] 4 containers: [600b86fd0c4b 05997bc3d8e7 b92bd2d7e951 4c5e55542ab2]
	I0812 03:45:22.060950    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:45:22.075188    9066 logs.go:276] 1 containers: [82b3aa847fe7]
	I0812 03:45:22.075253    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:45:22.086121    9066 logs.go:276] 1 containers: [a59a0c8eb222]
	I0812 03:45:22.086186    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:45:22.096771    9066 logs.go:276] 1 containers: [13754d953934]
	I0812 03:45:22.096843    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:45:22.110527    9066 logs.go:276] 0 containers: []
	W0812 03:45:22.110539    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:45:22.110592    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:45:22.120827    9066 logs.go:276] 1 containers: [9fc97d13acff]
	I0812 03:45:22.120846    9066 logs.go:123] Gathering logs for etcd [bfbe626398fc] ...
	I0812 03:45:22.120851    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfbe626398fc"
	I0812 03:45:22.135240    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:45:22.135252    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:45:22.175485    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:45:22.175498    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:45:22.180372    9066 logs.go:123] Gathering logs for kube-scheduler [82b3aa847fe7] ...
	I0812 03:45:22.180381    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82b3aa847fe7"
	I0812 03:45:22.196760    9066 logs.go:123] Gathering logs for kube-proxy [a59a0c8eb222] ...
	I0812 03:45:22.196774    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59a0c8eb222"
	I0812 03:45:22.208901    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:45:22.208914    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:45:22.231675    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:45:22.231682    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:45:22.265080    9066 logs.go:123] Gathering logs for coredns [600b86fd0c4b] ...
	I0812 03:45:22.265088    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 600b86fd0c4b"
	I0812 03:45:22.276619    9066 logs.go:123] Gathering logs for coredns [05997bc3d8e7] ...
	I0812 03:45:22.276632    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05997bc3d8e7"
	I0812 03:45:22.288136    9066 logs.go:123] Gathering logs for coredns [b92bd2d7e951] ...
	I0812 03:45:22.288147    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b92bd2d7e951"
	I0812 03:45:22.306972    9066 logs.go:123] Gathering logs for storage-provisioner [9fc97d13acff] ...
	I0812 03:45:22.306984    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc97d13acff"
	I0812 03:45:22.319448    9066 logs.go:123] Gathering logs for kube-apiserver [905bc1caf712] ...
	I0812 03:45:22.319462    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 905bc1caf712"
	I0812 03:45:22.336256    9066 logs.go:123] Gathering logs for kube-controller-manager [13754d953934] ...
	I0812 03:45:22.336267    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13754d953934"
	I0812 03:45:22.353405    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:45:22.353416    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:45:22.365087    9066 logs.go:123] Gathering logs for coredns [4c5e55542ab2] ...
	I0812 03:45:22.365099    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5e55542ab2"
	I0812 03:45:24.882497    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:45:29.884764    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:45:29.885193    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0812 03:45:29.926539    9066 logs.go:276] 1 containers: [905bc1caf712]
	I0812 03:45:29.926670    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0812 03:45:29.948536    9066 logs.go:276] 1 containers: [bfbe626398fc]
	I0812 03:45:29.948647    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0812 03:45:29.964963    9066 logs.go:276] 4 containers: [600b86fd0c4b 05997bc3d8e7 b92bd2d7e951 4c5e55542ab2]
	I0812 03:45:29.965040    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0812 03:45:29.980258    9066 logs.go:276] 1 containers: [82b3aa847fe7]
	I0812 03:45:29.980321    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0812 03:45:29.992415    9066 logs.go:276] 1 containers: [a59a0c8eb222]
	I0812 03:45:29.992489    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0812 03:45:30.003587    9066 logs.go:276] 1 containers: [13754d953934]
	I0812 03:45:30.003658    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0812 03:45:30.017321    9066 logs.go:276] 0 containers: []
	W0812 03:45:30.017333    9066 logs.go:278] No container was found matching "kindnet"
	I0812 03:45:30.017386    9066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0812 03:45:30.028226    9066 logs.go:276] 1 containers: [9fc97d13acff]
	I0812 03:45:30.028244    9066 logs.go:123] Gathering logs for coredns [05997bc3d8e7] ...
	I0812 03:45:30.028249    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05997bc3d8e7"
	I0812 03:45:30.040175    9066 logs.go:123] Gathering logs for coredns [b92bd2d7e951] ...
	I0812 03:45:30.040188    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b92bd2d7e951"
	I0812 03:45:30.051863    9066 logs.go:123] Gathering logs for coredns [4c5e55542ab2] ...
	I0812 03:45:30.051874    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5e55542ab2"
	I0812 03:45:30.064068    9066 logs.go:123] Gathering logs for kube-scheduler [82b3aa847fe7] ...
	I0812 03:45:30.064080    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82b3aa847fe7"
	I0812 03:45:30.079432    9066 logs.go:123] Gathering logs for storage-provisioner [9fc97d13acff] ...
	I0812 03:45:30.079443    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc97d13acff"
	I0812 03:45:30.095115    9066 logs.go:123] Gathering logs for kubelet ...
	I0812 03:45:30.095125    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 03:45:30.127956    9066 logs.go:123] Gathering logs for kube-apiserver [905bc1caf712] ...
	I0812 03:45:30.127970    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 905bc1caf712"
	I0812 03:45:30.142509    9066 logs.go:123] Gathering logs for etcd [bfbe626398fc] ...
	I0812 03:45:30.142523    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfbe626398fc"
	I0812 03:45:30.156451    9066 logs.go:123] Gathering logs for Docker ...
	I0812 03:45:30.156464    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0812 03:45:30.179392    9066 logs.go:123] Gathering logs for coredns [600b86fd0c4b] ...
	I0812 03:45:30.179399    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 600b86fd0c4b"
	I0812 03:45:30.191313    9066 logs.go:123] Gathering logs for container status ...
	I0812 03:45:30.191324    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 03:45:30.203095    9066 logs.go:123] Gathering logs for dmesg ...
	I0812 03:45:30.203108    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 03:45:30.207930    9066 logs.go:123] Gathering logs for kube-proxy [a59a0c8eb222] ...
	I0812 03:45:30.207939    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59a0c8eb222"
	I0812 03:45:30.220232    9066 logs.go:123] Gathering logs for kube-controller-manager [13754d953934] ...
	I0812 03:45:30.220244    9066 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13754d953934"
	I0812 03:45:30.237176    9066 logs.go:123] Gathering logs for describe nodes ...
	I0812 03:45:30.237186    9066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 03:45:32.778759    9066 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0812 03:45:37.781504    9066 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 03:45:37.788874    9066 out.go:177] 
	W0812 03:45:37.791948    9066 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0812 03:45:37.791957    9066 out.go:239] * 
	W0812 03:45:37.792634    9066 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:45:37.806890    9066 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-743000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (575.28s)
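The failure above is a readiness-wait timeout rather than a crash: the harness probes https://10.0.2.15:8443/healthz, and after each failed probe it re-runs the same diagnostic log-gathering pass (hence the verbatim "Gathering logs for ..." cycles repeating roughly every 8 seconds), until the 6m0s node wait expires with "context deadline exceeded". A minimal Go sketch of that poll-until-deadline pattern follows; the URL, the ~5s per-request timeout, and the 6-minute budget are taken from the log, while the names and sleep interval are assumed for illustration. This is not minikube's actual implementation.

	// healthz_poll.go - sketch of the poll-until-deadline pattern seen above.
	// URL, ~5s per-request timeout, and 6m budget come from the log; the
	// function names and retry interval are assumptions.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, budget time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // each probe in the log gives up after ~5s
			Transport: &http.Transport{
				// the apiserver certificate inside the VM is self-signed
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(budget)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz finally reported healthy
				}
			}
			// in the log, a full log-gathering pass runs here before
			// the next probe starts a few seconds later
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("apiserver healthz never reported healthy within %v", budget)
	}

	func main() {
		if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
			fmt.Println("X Exiting:", err) // compare the GUEST_START error above
		}
	}

Under this pattern every probe failure triggers another diagnostic pass, which is why the container listings and "docker logs --tail 400" runs above repeat unchanged until the overall budget is exhausted.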

                                                
                                    
TestPause/serial/Start (10.04s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-449000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-449000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.973335834s)

                                                
                                                
-- stdout --
	* [pause-449000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-449000" primary control-plane node in "pause-449000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-449000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-449000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-449000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-449000 -n pause-449000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-449000 -n pause-449000: exit status 7 (64.372125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-449000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.04s)
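
Every qemu2 failure in this run reduces to the same root cause visible above: libmachine launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the unix socket at /var/run/socket_vmnet, so both the initial create and the automatic retry exit with GUEST_PROVISION. A minimal triage sketch for the CI host, assuming socket_vmnet was installed via Homebrew and runs as a root launchd service (service names and paths may differ on this agent):

    # "Connection refused" on a unix socket means nothing is accepting on it;
    # first check whether the socket file exists at the path minikube uses.
    ls -l /var/run/socket_vmnet

    # Then check whether the daemon is loaded, and restart it if not.
    sudo launchctl list | grep -i socket_vmnet
    sudo brew services restart socket_vmnet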

TestNoKubernetes/serial/StartWithK8s (9.8s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-971000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-971000 --driver=qemu2 : exit status 80 (9.749379792s)

-- stdout --
	* [NoKubernetes-971000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-971000" primary control-plane node in "NoKubernetes-971000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-971000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-971000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-971000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-971000 -n NoKubernetes-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-971000 -n NoKubernetes-971000: exit status 7 (47.014916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.80s)

TestNoKubernetes/serial/StartWithStopK8s (5.3s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-971000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-971000 --no-kubernetes --driver=qemu2 : exit status 80 (5.240385667s)

-- stdout --
	* [NoKubernetes-971000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-971000
	* Restarting existing qemu2 VM for "NoKubernetes-971000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-971000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-971000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-971000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-971000 -n NoKubernetes-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-971000 -n NoKubernetes-971000: exit status 7 (54.83225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.30s)
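
Note that this subtest and the two that follow hit the failure on the "Restarting existing qemu2 VM" path, so the error is reported as "driver start:" rather than "creating host: create: creating:"; the underlying socket error is unchanged. The log's own suggestion applies if the stale profile needs clearing between runs (this only removes the profile, it does not repair the socket_vmnet daemon):

    out/minikube-darwin-arm64 delete -p NoKubernetes-971000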

TestNoKubernetes/serial/Start (5.26s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-971000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-971000 --no-kubernetes --driver=qemu2 : exit status 80 (5.226182375s)

-- stdout --
	* [NoKubernetes-971000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-971000
	* Restarting existing qemu2 VM for "NoKubernetes-971000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-971000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-971000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-971000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-971000 -n NoKubernetes-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-971000 -n NoKubernetes-971000: exit status 7 (29.128792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.26s)

TestNoKubernetes/serial/StartNoArgs (5.31s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-971000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-971000 --driver=qemu2 : exit status 80 (5.261271625s)

-- stdout --
	* [NoKubernetes-971000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-971000
	* Restarting existing qemu2 VM for "NoKubernetes-971000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-971000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-971000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-971000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-971000 -n NoKubernetes-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-971000 -n NoKubernetes-971000: exit status 7 (47.532833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.31s)

TestNetworkPlugins/group/auto/Start (9.83s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.832946167s)

-- stdout --
	* [auto-487000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-487000" primary control-plane node in "auto-487000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-487000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0812 03:43:40.963988    9247 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:43:40.964124    9247 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:43:40.964127    9247 out.go:304] Setting ErrFile to fd 2...
	I0812 03:43:40.964132    9247 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:43:40.964273    9247 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:43:40.965352    9247 out.go:298] Setting JSON to false
	I0812 03:43:40.981620    9247 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6190,"bootTime":1723453230,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:43:40.981694    9247 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:43:40.985657    9247 out.go:177] * [auto-487000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:43:40.993811    9247 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:43:40.993891    9247 notify.go:220] Checking for updates...
	I0812 03:43:41.000581    9247 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:43:41.003696    9247 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:43:41.006660    9247 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:43:41.009625    9247 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:43:41.012629    9247 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:43:41.015842    9247 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:43:41.015914    9247 config.go:182] Loaded profile config "stopped-upgrade-743000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0812 03:43:41.015965    9247 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:43:41.019601    9247 out.go:177] * Using the qemu2 driver based on user configuration
	I0812 03:43:41.025607    9247 start.go:297] selected driver: qemu2
	I0812 03:43:41.025619    9247 start.go:901] validating driver "qemu2" against <nil>
	I0812 03:43:41.025627    9247 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:43:41.027874    9247 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 03:43:41.030654    9247 out.go:177] * Automatically selected the socket_vmnet network
	I0812 03:43:41.033705    9247 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 03:43:41.033724    9247 cni.go:84] Creating CNI manager for ""
	I0812 03:43:41.033729    9247 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0812 03:43:41.033732    9247 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0812 03:43:41.033759    9247 start.go:340] cluster config:
	{Name:auto-487000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:43:41.037257    9247 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:43:41.044590    9247 out.go:177] * Starting "auto-487000" primary control-plane node in "auto-487000" cluster
	I0812 03:43:41.048651    9247 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0812 03:43:41.048669    9247 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0812 03:43:41.048677    9247 cache.go:56] Caching tarball of preloaded images
	I0812 03:43:41.048753    9247 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0812 03:43:41.048759    9247 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0812 03:43:41.048824    9247 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/auto-487000/config.json ...
	I0812 03:43:41.048840    9247 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/auto-487000/config.json: {Name:mk3c0980d14135bce51aea98212515104a04a0d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:43:41.049220    9247 start.go:360] acquireMachinesLock for auto-487000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:43:41.049253    9247 start.go:364] duration metric: took 27.333µs to acquireMachinesLock for "auto-487000"
	I0812 03:43:41.049265    9247 start.go:93] Provisioning new machine with config: &{Name:auto-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:43:41.049291    9247 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:43:41.057700    9247 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0812 03:43:41.074701    9247 start.go:159] libmachine.API.Create for "auto-487000" (driver="qemu2")
	I0812 03:43:41.074736    9247 client.go:168] LocalClient.Create starting
	I0812 03:43:41.074802    9247 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:43:41.074833    9247 main.go:141] libmachine: Decoding PEM data...
	I0812 03:43:41.074846    9247 main.go:141] libmachine: Parsing certificate...
	I0812 03:43:41.074889    9247 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:43:41.074913    9247 main.go:141] libmachine: Decoding PEM data...
	I0812 03:43:41.074921    9247 main.go:141] libmachine: Parsing certificate...
	I0812 03:43:41.075408    9247 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:43:41.229600    9247 main.go:141] libmachine: Creating SSH key...
	I0812 03:43:41.390817    9247 main.go:141] libmachine: Creating Disk image...
	I0812 03:43:41.390825    9247 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:43:41.391052    9247 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/auto-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/auto-487000/disk.qcow2
	I0812 03:43:41.401161    9247 main.go:141] libmachine: STDOUT: 
	I0812 03:43:41.401193    9247 main.go:141] libmachine: STDERR: 
	I0812 03:43:41.401249    9247 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/auto-487000/disk.qcow2 +20000M
	I0812 03:43:41.409799    9247 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:43:41.409817    9247 main.go:141] libmachine: STDERR: 
	I0812 03:43:41.409835    9247 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/auto-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/auto-487000/disk.qcow2
	I0812 03:43:41.409840    9247 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:43:41.409858    9247 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:43:41.409889    9247 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/auto-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/auto-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/auto-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:ec:7b:d1:ab:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/auto-487000/disk.qcow2
	I0812 03:43:41.411642    9247 main.go:141] libmachine: STDOUT: 
	I0812 03:43:41.411658    9247 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:43:41.411676    9247 client.go:171] duration metric: took 336.939ms to LocalClient.Create
	I0812 03:43:43.413869    9247 start.go:128] duration metric: took 2.364575666s to createHost
	I0812 03:43:43.413937    9247 start.go:83] releasing machines lock for "auto-487000", held for 2.364706833s
	W0812 03:43:43.414017    9247 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:43:43.431434    9247 out.go:177] * Deleting "auto-487000" in qemu2 ...
	W0812 03:43:43.452323    9247 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:43:43.452354    9247 start.go:729] Will try again in 5 seconds ...
	I0812 03:43:48.454433    9247 start.go:360] acquireMachinesLock for auto-487000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:43:48.454671    9247 start.go:364] duration metric: took 194.209µs to acquireMachinesLock for "auto-487000"
	I0812 03:43:48.454709    9247 start.go:93] Provisioning new machine with config: &{Name:auto-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:43:48.454854    9247 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:43:48.463215    9247 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0812 03:43:48.492269    9247 start.go:159] libmachine.API.Create for "auto-487000" (driver="qemu2")
	I0812 03:43:48.492305    9247 client.go:168] LocalClient.Create starting
	I0812 03:43:48.492395    9247 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:43:48.492461    9247 main.go:141] libmachine: Decoding PEM data...
	I0812 03:43:48.492475    9247 main.go:141] libmachine: Parsing certificate...
	I0812 03:43:48.492525    9247 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:43:48.492560    9247 main.go:141] libmachine: Decoding PEM data...
	I0812 03:43:48.492571    9247 main.go:141] libmachine: Parsing certificate...
	I0812 03:43:48.492941    9247 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:43:48.644877    9247 main.go:141] libmachine: Creating SSH key...
	I0812 03:43:48.704330    9247 main.go:141] libmachine: Creating Disk image...
	I0812 03:43:48.704342    9247 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:43:48.704583    9247 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/auto-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/auto-487000/disk.qcow2
	I0812 03:43:48.714661    9247 main.go:141] libmachine: STDOUT: 
	I0812 03:43:48.714687    9247 main.go:141] libmachine: STDERR: 
	I0812 03:43:48.714744    9247 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/auto-487000/disk.qcow2 +20000M
	I0812 03:43:48.723025    9247 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:43:48.723042    9247 main.go:141] libmachine: STDERR: 
	I0812 03:43:48.723055    9247 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/auto-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/auto-487000/disk.qcow2
	I0812 03:43:48.723060    9247 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:43:48.723069    9247 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:43:48.723092    9247 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/auto-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/auto-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/auto-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:47:a5:ea:54:73 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/auto-487000/disk.qcow2
	I0812 03:43:48.724764    9247 main.go:141] libmachine: STDOUT: 
	I0812 03:43:48.724778    9247 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:43:48.724791    9247 client.go:171] duration metric: took 232.484833ms to LocalClient.Create
	I0812 03:43:50.726853    9247 start.go:128] duration metric: took 2.272016791s to createHost
	I0812 03:43:50.726886    9247 start.go:83] releasing machines lock for "auto-487000", held for 2.272233833s
	W0812 03:43:50.727044    9247 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:43:50.739288    9247 out.go:177] 
	W0812 03:43:50.742364    9247 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:43:50.742384    9247 out.go:239] * 
	* 
	W0812 03:43:50.743295    9247 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:43:50.760298    9247 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.83s)
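
Because this test ran with --alsologtostderr, the log records the exact launch command, which makes the failing step easy to reproduce outside the test harness. A sketch, assuming socket_vmnet_client's usual wrapper semantics (connect to the socket, then exec the given command); `true` stands in here for the full qemu-system-aarch64 invocation shown in the log above:

    # Should print the same 'Failed to connect to "/var/run/socket_vmnet"' error
    # until the daemon is restored, without booting a VM.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true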

TestNetworkPlugins/group/kindnet/Start (9.74s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.744124833s)

-- stdout --
	* [kindnet-487000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-487000" primary control-plane node in "kindnet-487000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-487000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0812 03:43:52.940744    9364 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:43:52.940862    9364 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:43:52.940866    9364 out.go:304] Setting ErrFile to fd 2...
	I0812 03:43:52.940868    9364 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:43:52.940997    9364 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:43:52.942049    9364 out.go:298] Setting JSON to false
	I0812 03:43:52.958435    9364 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6202,"bootTime":1723453230,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:43:52.958509    9364 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:43:52.963661    9364 out.go:177] * [kindnet-487000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:43:52.970656    9364 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:43:52.970710    9364 notify.go:220] Checking for updates...
	I0812 03:43:52.977613    9364 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:43:52.980663    9364 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:43:52.983628    9364 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:43:52.986632    9364 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:43:52.989676    9364 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:43:52.992863    9364 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:43:52.992935    9364 config.go:182] Loaded profile config "stopped-upgrade-743000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0812 03:43:52.992992    9364 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:43:52.997582    9364 out.go:177] * Using the qemu2 driver based on user configuration
	I0812 03:43:53.003570    9364 start.go:297] selected driver: qemu2
	I0812 03:43:53.003576    9364 start.go:901] validating driver "qemu2" against <nil>
	I0812 03:43:53.003581    9364 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:43:53.005866    9364 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 03:43:53.008666    9364 out.go:177] * Automatically selected the socket_vmnet network
	I0812 03:43:53.011695    9364 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 03:43:53.011727    9364 cni.go:84] Creating CNI manager for "kindnet"
	I0812 03:43:53.011730    9364 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0812 03:43:53.011768    9364 start.go:340] cluster config:
	{Name:kindnet-487000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:43:53.015528    9364 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:43:53.023582    9364 out.go:177] * Starting "kindnet-487000" primary control-plane node in "kindnet-487000" cluster
	I0812 03:43:53.027650    9364 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0812 03:43:53.027663    9364 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0812 03:43:53.027669    9364 cache.go:56] Caching tarball of preloaded images
	I0812 03:43:53.027724    9364 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0812 03:43:53.027729    9364 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0812 03:43:53.027785    9364 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/kindnet-487000/config.json ...
	I0812 03:43:53.027796    9364 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/kindnet-487000/config.json: {Name:mkd05dd3b627e7ba99cbdb169d4b34b04ea645c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:43:53.028181    9364 start.go:360] acquireMachinesLock for kindnet-487000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:43:53.028215    9364 start.go:364] duration metric: took 28.417µs to acquireMachinesLock for "kindnet-487000"
	I0812 03:43:53.028227    9364 start.go:93] Provisioning new machine with config: &{Name:kindnet-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:43:53.028260    9364 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:43:53.036595    9364 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0812 03:43:53.054521    9364 start.go:159] libmachine.API.Create for "kindnet-487000" (driver="qemu2")
	I0812 03:43:53.054546    9364 client.go:168] LocalClient.Create starting
	I0812 03:43:53.054610    9364 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:43:53.054639    9364 main.go:141] libmachine: Decoding PEM data...
	I0812 03:43:53.054649    9364 main.go:141] libmachine: Parsing certificate...
	I0812 03:43:53.054690    9364 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:43:53.054715    9364 main.go:141] libmachine: Decoding PEM data...
	I0812 03:43:53.054721    9364 main.go:141] libmachine: Parsing certificate...
	I0812 03:43:53.055181    9364 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:43:53.205505    9364 main.go:141] libmachine: Creating SSH key...
	I0812 03:43:53.305143    9364 main.go:141] libmachine: Creating Disk image...
	I0812 03:43:53.305149    9364 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:43:53.305346    9364 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kindnet-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kindnet-487000/disk.qcow2
	I0812 03:43:53.314745    9364 main.go:141] libmachine: STDOUT: 
	I0812 03:43:53.314772    9364 main.go:141] libmachine: STDERR: 
	I0812 03:43:53.314829    9364 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kindnet-487000/disk.qcow2 +20000M
	I0812 03:43:53.322906    9364 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:43:53.322923    9364 main.go:141] libmachine: STDERR: 
	I0812 03:43:53.322943    9364 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kindnet-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kindnet-487000/disk.qcow2
	I0812 03:43:53.322948    9364 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:43:53.322967    9364 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:43:53.322994    9364 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kindnet-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kindnet-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kindnet-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:5a:e5:b8:f6:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kindnet-487000/disk.qcow2
	I0812 03:43:53.324621    9364 main.go:141] libmachine: STDOUT: 
	I0812 03:43:53.324638    9364 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:43:53.324656    9364 client.go:171] duration metric: took 270.1085ms to LocalClient.Create
	I0812 03:43:55.325743    9364 start.go:128] duration metric: took 2.297506709s to createHost
	I0812 03:43:55.325768    9364 start.go:83] releasing machines lock for "kindnet-487000", held for 2.297580042s
	W0812 03:43:55.325789    9364 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:43:55.334534    9364 out.go:177] * Deleting "kindnet-487000" in qemu2 ...
	W0812 03:43:55.348133    9364 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:43:55.348149    9364 start.go:729] Will try again in 5 seconds ...
	I0812 03:44:00.350367    9364 start.go:360] acquireMachinesLock for kindnet-487000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:44:00.350762    9364 start.go:364] duration metric: took 300.666µs to acquireMachinesLock for "kindnet-487000"
	I0812 03:44:00.350886    9364 start.go:93] Provisioning new machine with config: &{Name:kindnet-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:44:00.351121    9364 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:44:00.357808    9364 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0812 03:44:00.400356    9364 start.go:159] libmachine.API.Create for "kindnet-487000" (driver="qemu2")
	I0812 03:44:00.400413    9364 client.go:168] LocalClient.Create starting
	I0812 03:44:00.400542    9364 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:44:00.400599    9364 main.go:141] libmachine: Decoding PEM data...
	I0812 03:44:00.400612    9364 main.go:141] libmachine: Parsing certificate...
	I0812 03:44:00.400682    9364 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:44:00.400721    9364 main.go:141] libmachine: Decoding PEM data...
	I0812 03:44:00.400734    9364 main.go:141] libmachine: Parsing certificate...
	I0812 03:44:00.401214    9364 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:44:00.559985    9364 main.go:141] libmachine: Creating SSH key...
	I0812 03:44:00.597272    9364 main.go:141] libmachine: Creating Disk image...
	I0812 03:44:00.597284    9364 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:44:00.597488    9364 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kindnet-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kindnet-487000/disk.qcow2
	I0812 03:44:00.606731    9364 main.go:141] libmachine: STDOUT: 
	I0812 03:44:00.606752    9364 main.go:141] libmachine: STDERR: 
	I0812 03:44:00.606805    9364 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kindnet-487000/disk.qcow2 +20000M
	I0812 03:44:00.614787    9364 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:44:00.614808    9364 main.go:141] libmachine: STDERR: 
	I0812 03:44:00.614819    9364 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kindnet-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kindnet-487000/disk.qcow2
	I0812 03:44:00.614825    9364 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:44:00.614834    9364 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:44:00.614884    9364 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kindnet-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kindnet-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kindnet-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:ce:02:13:25:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kindnet-487000/disk.qcow2
	I0812 03:44:00.616507    9364 main.go:141] libmachine: STDOUT: 
	I0812 03:44:00.616526    9364 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:44:00.616538    9364 client.go:171] duration metric: took 216.123834ms to LocalClient.Create
	I0812 03:44:02.618619    9364 start.go:128] duration metric: took 2.267502542s to createHost
	I0812 03:44:02.618679    9364 start.go:83] releasing machines lock for "kindnet-487000", held for 2.267933416s
	W0812 03:44:02.618877    9364 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:44:02.631902    9364 out.go:177] 
	W0812 03:44:02.634886    9364 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:44:02.634907    9364 out.go:239] * 
	* 
	W0812 03:44:02.635875    9364 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:44:02.648872    9364 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.74s)
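Note on the failure mode: every attempt in this group dies at the same step. libmachine builds the qcow2 disk successfully (qemu-img convert, then qemu-img resize), then launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so QEMU never receives its network file descriptor and createHost aborts. The daemon's reachability can be probed outside the test suite with a minimal Go sketch like the one below (a hypothetical diagnostic, not part of net_test.go; only the socket path is taken from the logs):

    // socketprobe.go: checks whether the socket_vmnet daemon is accepting
    // connections on the unix socket that minikube's qemu2 driver uses.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // path from SocketVMnetPath in the logs
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // With the daemon down, this prints the same "connection refused"
            // that socket_vmnet_client reports in the failures above.
            fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections on", sock)
    }

If the probe fails, the daemon is simply not running on the build agent; restarting it (for a Homebrew install of socket_vmnet, typically `sudo brew services start socket_vmnet`) is worth verifying before re-running the suite.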

TestNetworkPlugins/group/calico/Start (9.88s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.877767208s)

-- stdout --
	* [calico-487000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-487000" primary control-plane node in "calico-487000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-487000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0812 03:44:04.932583    9479 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:44:04.932707    9479 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:44:04.932711    9479 out.go:304] Setting ErrFile to fd 2...
	I0812 03:44:04.932714    9479 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:44:04.932854    9479 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:44:04.933908    9479 out.go:298] Setting JSON to false
	I0812 03:44:04.950223    9479 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6214,"bootTime":1723453230,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:44:04.950300    9479 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:44:04.955355    9479 out.go:177] * [calico-487000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:44:04.963435    9479 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:44:04.963538    9479 notify.go:220] Checking for updates...
	I0812 03:44:04.970298    9479 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:44:04.973331    9479 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:44:04.976274    9479 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:44:04.979331    9479 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:44:04.982361    9479 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:44:04.985581    9479 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:44:04.985650    9479 config.go:182] Loaded profile config "stopped-upgrade-743000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0812 03:44:04.985697    9479 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:44:04.989316    9479 out.go:177] * Using the qemu2 driver based on user configuration
	I0812 03:44:04.995324    9479 start.go:297] selected driver: qemu2
	I0812 03:44:04.995331    9479 start.go:901] validating driver "qemu2" against <nil>
	I0812 03:44:04.995336    9479 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:44:04.997441    9479 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 03:44:05.000311    9479 out.go:177] * Automatically selected the socket_vmnet network
	I0812 03:44:05.003401    9479 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 03:44:05.003423    9479 cni.go:84] Creating CNI manager for "calico"
	I0812 03:44:05.003426    9479 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0812 03:44:05.003450    9479 start.go:340] cluster config:
	{Name:calico-487000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:44:05.006939    9479 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:44:05.014359    9479 out.go:177] * Starting "calico-487000" primary control-plane node in "calico-487000" cluster
	I0812 03:44:05.018376    9479 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0812 03:44:05.018395    9479 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0812 03:44:05.018405    9479 cache.go:56] Caching tarball of preloaded images
	I0812 03:44:05.018469    9479 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0812 03:44:05.018475    9479 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0812 03:44:05.018554    9479 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/calico-487000/config.json ...
	I0812 03:44:05.018566    9479 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/calico-487000/config.json: {Name:mkba0178e6f401d01ffdf5a69e572a4d37dec457 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:44:05.018945    9479 start.go:360] acquireMachinesLock for calico-487000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:44:05.018981    9479 start.go:364] duration metric: took 30.167µs to acquireMachinesLock for "calico-487000"
	I0812 03:44:05.018993    9479 start.go:93] Provisioning new machine with config: &{Name:calico-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:44:05.019020    9479 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:44:05.027370    9479 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0812 03:44:05.043440    9479 start.go:159] libmachine.API.Create for "calico-487000" (driver="qemu2")
	I0812 03:44:05.043476    9479 client.go:168] LocalClient.Create starting
	I0812 03:44:05.043537    9479 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:44:05.043569    9479 main.go:141] libmachine: Decoding PEM data...
	I0812 03:44:05.043581    9479 main.go:141] libmachine: Parsing certificate...
	I0812 03:44:05.043618    9479 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:44:05.043640    9479 main.go:141] libmachine: Decoding PEM data...
	I0812 03:44:05.043649    9479 main.go:141] libmachine: Parsing certificate...
	I0812 03:44:05.044133    9479 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:44:05.193952    9479 main.go:141] libmachine: Creating SSH key...
	I0812 03:44:05.344133    9479 main.go:141] libmachine: Creating Disk image...
	I0812 03:44:05.344141    9479 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:44:05.344369    9479 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/calico-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/calico-487000/disk.qcow2
	I0812 03:44:05.353944    9479 main.go:141] libmachine: STDOUT: 
	I0812 03:44:05.353962    9479 main.go:141] libmachine: STDERR: 
	I0812 03:44:05.354017    9479 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/calico-487000/disk.qcow2 +20000M
	I0812 03:44:05.362392    9479 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:44:05.362430    9479 main.go:141] libmachine: STDERR: 
	I0812 03:44:05.362445    9479 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/calico-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/calico-487000/disk.qcow2
	I0812 03:44:05.362452    9479 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:44:05.362463    9479 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:44:05.362488    9479 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/calico-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/calico-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/calico-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:9e:13:25:f3:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/calico-487000/disk.qcow2
	I0812 03:44:05.364156    9479 main.go:141] libmachine: STDOUT: 
	I0812 03:44:05.364172    9479 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:44:05.364192    9479 client.go:171] duration metric: took 320.714584ms to LocalClient.Create
	I0812 03:44:07.366369    9479 start.go:128] duration metric: took 2.347350875s to createHost
	I0812 03:44:07.366434    9479 start.go:83] releasing machines lock for "calico-487000", held for 2.347476667s
	W0812 03:44:07.366535    9479 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:44:07.373832    9479 out.go:177] * Deleting "calico-487000" in qemu2 ...
	W0812 03:44:07.404464    9479 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:44:07.404507    9479 start.go:729] Will try again in 5 seconds ...
	I0812 03:44:12.406703    9479 start.go:360] acquireMachinesLock for calico-487000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:44:12.407194    9479 start.go:364] duration metric: took 394.458µs to acquireMachinesLock for "calico-487000"
	I0812 03:44:12.407258    9479 start.go:93] Provisioning new machine with config: &{Name:calico-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:44:12.407660    9479 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:44:12.412121    9479 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0812 03:44:12.461983    9479 start.go:159] libmachine.API.Create for "calico-487000" (driver="qemu2")
	I0812 03:44:12.462033    9479 client.go:168] LocalClient.Create starting
	I0812 03:44:12.462156    9479 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:44:12.462227    9479 main.go:141] libmachine: Decoding PEM data...
	I0812 03:44:12.462244    9479 main.go:141] libmachine: Parsing certificate...
	I0812 03:44:12.462304    9479 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:44:12.462348    9479 main.go:141] libmachine: Decoding PEM data...
	I0812 03:44:12.462362    9479 main.go:141] libmachine: Parsing certificate...
	I0812 03:44:12.462990    9479 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:44:12.623291    9479 main.go:141] libmachine: Creating SSH key...
	I0812 03:44:12.720874    9479 main.go:141] libmachine: Creating Disk image...
	I0812 03:44:12.720883    9479 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:44:12.721119    9479 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/calico-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/calico-487000/disk.qcow2
	I0812 03:44:12.730327    9479 main.go:141] libmachine: STDOUT: 
	I0812 03:44:12.730345    9479 main.go:141] libmachine: STDERR: 
	I0812 03:44:12.730394    9479 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/calico-487000/disk.qcow2 +20000M
	I0812 03:44:12.738518    9479 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:44:12.738533    9479 main.go:141] libmachine: STDERR: 
	I0812 03:44:12.738543    9479 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/calico-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/calico-487000/disk.qcow2
	I0812 03:44:12.738548    9479 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:44:12.738560    9479 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:44:12.738603    9479 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/calico-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/calico-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/calico-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:8d:04:ad:85:2b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/calico-487000/disk.qcow2
	I0812 03:44:12.740240    9479 main.go:141] libmachine: STDOUT: 
	I0812 03:44:12.740254    9479 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:44:12.740269    9479 client.go:171] duration metric: took 278.23275ms to LocalClient.Create
	I0812 03:44:14.742330    9479 start.go:128] duration metric: took 2.33467625s to createHost
	I0812 03:44:14.742359    9479 start.go:83] releasing machines lock for "calico-487000", held for 2.335177917s
	W0812 03:44:14.742530    9479 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:44:14.757796    9479 out.go:177] 
	W0812 03:44:14.760742    9479 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:44:14.760747    9479 out.go:239] * 
	* 
	W0812 03:44:14.761239    9479 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:44:14.771715    9479 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.88s)
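The createHost sequence logged above is fully deterministic: build the qcow2 disk (qemu-img convert, then qemu-img resize +20000M), hand the QEMU command line to socket_vmnet_client, hit "Connection refused", delete the profile, wait five seconds, retry once, and exit with status 80 (GUEST_PROVISION). A compact sketch of that observed retry shape follows (illustrative only, not minikube's actual start.go; all names here are invented):

    // retrysketch.go: models the one-retry-after-5s behavior visible in the
    // logs ("StartHost failed, but will try again" / "Will try again in 5 seconds").
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    var errSocket = errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)

    // createHost stands in for the real step that shells out to qemu-img and
    // socket_vmnet_client; with the daemon down it fails every time.
    func createHost(profile string) error { return errSocket }

    func startWithRetry(profile string) error {
        err := createHost(profile)
        if err == nil {
            return nil
        }
        fmt.Printf("! StartHost failed, but will try again: %v\n", err)
        time.Sleep(5 * time.Second)
        if err := createHost(profile); err != nil {
            return fmt.Errorf("GUEST_PROVISION: error provisioning guest: %w", err)
        }
        return nil
    }

    func main() {
        if err := startWithRetry("calico-487000"); err != nil {
            fmt.Println("X Exiting due to", err) // the suite sees this as exit status 80
        }
    }

Because the retry dials the same dead socket, it can never succeed while the daemon is down, which is why each of these Start tests fails in just under ten seconds instead of running to its 15m wait timeout.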

TestNetworkPlugins/group/custom-flannel/Start (9.78s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.782301708s)

-- stdout --
	* [custom-flannel-487000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-487000" primary control-plane node in "custom-flannel-487000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-487000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0812 03:44:17.144469    9598 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:44:17.144611    9598 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:44:17.144614    9598 out.go:304] Setting ErrFile to fd 2...
	I0812 03:44:17.144616    9598 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:44:17.144746    9598 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:44:17.145844    9598 out.go:298] Setting JSON to false
	I0812 03:44:17.162290    9598 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6227,"bootTime":1723453230,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:44:17.162370    9598 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:44:17.168039    9598 out.go:177] * [custom-flannel-487000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:44:17.175057    9598 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:44:17.175150    9598 notify.go:220] Checking for updates...
	I0812 03:44:17.182060    9598 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:44:17.185018    9598 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:44:17.188000    9598 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:44:17.190986    9598 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:44:17.192509    9598 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:44:17.196384    9598 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:44:17.196452    9598 config.go:182] Loaded profile config "stopped-upgrade-743000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0812 03:44:17.196501    9598 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:44:17.200998    9598 out.go:177] * Using the qemu2 driver based on user configuration
	I0812 03:44:17.206048    9598 start.go:297] selected driver: qemu2
	I0812 03:44:17.206054    9598 start.go:901] validating driver "qemu2" against <nil>
	I0812 03:44:17.206069    9598 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:44:17.208386    9598 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 03:44:17.210988    9598 out.go:177] * Automatically selected the socket_vmnet network
	I0812 03:44:17.214091    9598 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 03:44:17.214106    9598 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0812 03:44:17.214113    9598 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0812 03:44:17.214148    9598 start.go:340] cluster config:
	{Name:custom-flannel-487000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:44:17.217751    9598 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:44:17.225068    9598 out.go:177] * Starting "custom-flannel-487000" primary control-plane node in "custom-flannel-487000" cluster
	I0812 03:44:17.229029    9598 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0812 03:44:17.229045    9598 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0812 03:44:17.229054    9598 cache.go:56] Caching tarball of preloaded images
	I0812 03:44:17.229116    9598 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0812 03:44:17.229123    9598 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0812 03:44:17.229194    9598 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/custom-flannel-487000/config.json ...
	I0812 03:44:17.229211    9598 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/custom-flannel-487000/config.json: {Name:mk983cb82167938e385baa4fd200661527dc056f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:44:17.229582    9598 start.go:360] acquireMachinesLock for custom-flannel-487000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:44:17.229615    9598 start.go:364] duration metric: took 26.25µs to acquireMachinesLock for "custom-flannel-487000"
	I0812 03:44:17.229628    9598 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:44:17.229668    9598 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:44:17.236972    9598 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0812 03:44:17.253851    9598 start.go:159] libmachine.API.Create for "custom-flannel-487000" (driver="qemu2")
	I0812 03:44:17.253879    9598 client.go:168] LocalClient.Create starting
	I0812 03:44:17.253950    9598 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:44:17.253980    9598 main.go:141] libmachine: Decoding PEM data...
	I0812 03:44:17.253996    9598 main.go:141] libmachine: Parsing certificate...
	I0812 03:44:17.254036    9598 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:44:17.254066    9598 main.go:141] libmachine: Decoding PEM data...
	I0812 03:44:17.254072    9598 main.go:141] libmachine: Parsing certificate...
	I0812 03:44:17.254464    9598 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:44:17.406734    9598 main.go:141] libmachine: Creating SSH key...
	I0812 03:44:17.558690    9598 main.go:141] libmachine: Creating Disk image...
	I0812 03:44:17.558698    9598 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:44:17.558946    9598 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/custom-flannel-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/custom-flannel-487000/disk.qcow2
	I0812 03:44:17.568623    9598 main.go:141] libmachine: STDOUT: 
	I0812 03:44:17.568645    9598 main.go:141] libmachine: STDERR: 
	I0812 03:44:17.568695    9598 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/custom-flannel-487000/disk.qcow2 +20000M
	I0812 03:44:17.576951    9598 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:44:17.576967    9598 main.go:141] libmachine: STDERR: 
	I0812 03:44:17.576983    9598 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/custom-flannel-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/custom-flannel-487000/disk.qcow2
	I0812 03:44:17.576988    9598 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:44:17.577005    9598 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:44:17.577028    9598 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/custom-flannel-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/custom-flannel-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/custom-flannel-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:2b:c9:8a:ea:49 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/custom-flannel-487000/disk.qcow2
	I0812 03:44:17.578740    9598 main.go:141] libmachine: STDOUT: 
	I0812 03:44:17.578754    9598 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:44:17.578770    9598 client.go:171] duration metric: took 324.892375ms to LocalClient.Create
	I0812 03:44:19.580833    9598 start.go:128] duration metric: took 2.351186916s to createHost
	I0812 03:44:19.580863    9598 start.go:83] releasing machines lock for "custom-flannel-487000", held for 2.351275291s
	W0812 03:44:19.580901    9598 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:44:19.592491    9598 out.go:177] * Deleting "custom-flannel-487000" in qemu2 ...
	W0812 03:44:19.603274    9598 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:44:19.603282    9598 start.go:729] Will try again in 5 seconds ...
	I0812 03:44:24.605364    9598 start.go:360] acquireMachinesLock for custom-flannel-487000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:44:24.605596    9598 start.go:364] duration metric: took 172.625µs to acquireMachinesLock for "custom-flannel-487000"
	I0812 03:44:24.605631    9598 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:44:24.605713    9598 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:44:24.613033    9598 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0812 03:44:24.634527    9598 start.go:159] libmachine.API.Create for "custom-flannel-487000" (driver="qemu2")
	I0812 03:44:24.634574    9598 client.go:168] LocalClient.Create starting
	I0812 03:44:24.634645    9598 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:44:24.634680    9598 main.go:141] libmachine: Decoding PEM data...
	I0812 03:44:24.634691    9598 main.go:141] libmachine: Parsing certificate...
	I0812 03:44:24.634732    9598 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:44:24.634758    9598 main.go:141] libmachine: Decoding PEM data...
	I0812 03:44:24.634767    9598 main.go:141] libmachine: Parsing certificate...
	I0812 03:44:24.635103    9598 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:44:24.784820    9598 main.go:141] libmachine: Creating SSH key...
	I0812 03:44:24.846529    9598 main.go:141] libmachine: Creating Disk image...
	I0812 03:44:24.846539    9598 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:44:24.846737    9598 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/custom-flannel-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/custom-flannel-487000/disk.qcow2
	I0812 03:44:24.855870    9598 main.go:141] libmachine: STDOUT: 
	I0812 03:44:24.855889    9598 main.go:141] libmachine: STDERR: 
	I0812 03:44:24.855945    9598 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/custom-flannel-487000/disk.qcow2 +20000M
	I0812 03:44:24.864160    9598 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:44:24.864175    9598 main.go:141] libmachine: STDERR: 
	I0812 03:44:24.864188    9598 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/custom-flannel-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/custom-flannel-487000/disk.qcow2
	I0812 03:44:24.864194    9598 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:44:24.864206    9598 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:44:24.864241    9598 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/custom-flannel-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/custom-flannel-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/custom-flannel-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:b2:e2:6b:aa:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/custom-flannel-487000/disk.qcow2
	I0812 03:44:24.866007    9598 main.go:141] libmachine: STDOUT: 
	I0812 03:44:24.866024    9598 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:44:24.866035    9598 client.go:171] duration metric: took 231.460833ms to LocalClient.Create
	I0812 03:44:26.867012    9598 start.go:128] duration metric: took 2.261317334s to createHost
	I0812 03:44:26.867029    9598 start.go:83] releasing machines lock for "custom-flannel-487000", held for 2.261455375s
	W0812 03:44:26.867213    9598 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:44:26.878084    9598 out.go:177] 
	W0812 03:44:26.882048    9598 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:44:26.882071    9598 out.go:239] * 
	* 
	W0812 03:44:26.882714    9598 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:44:26.888917    9598 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.78s)
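Triage note: this failure (and every identical one below) occurs before QEMU is even executed. The qemu2 driver starts the VM through /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the socket_vmnet daemon at /var/run/socket_vmnet; the "Connection refused" in STDERR means nothing was listening on that socket. A minimal shell sketch for checking the agent host follows (the brew services line is an assumption that socket_vmnet was installed via Homebrew; the log does not say how it was installed):

	# Is the unix socket present, and is a daemon process alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# Assumption: socket_vmnet was installed via Homebrew and runs as a
	# launchd service; restarting it should restore the listener.
	sudo brew services restart socket_vmnet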

TestNetworkPlugins/group/false/Start (9.86s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.853298s)

-- stdout --
	* [false-487000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-487000" primary control-plane node in "false-487000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-487000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0812 03:44:29.258337    9716 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:44:29.258464    9716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:44:29.258471    9716 out.go:304] Setting ErrFile to fd 2...
	I0812 03:44:29.258481    9716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:44:29.258622    9716 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:44:29.259696    9716 out.go:298] Setting JSON to false
	I0812 03:44:29.275779    9716 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6239,"bootTime":1723453230,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:44:29.275869    9716 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:44:29.282372    9716 out.go:177] * [false-487000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:44:29.290522    9716 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:44:29.290565    9716 notify.go:220] Checking for updates...
	I0812 03:44:29.297428    9716 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:44:29.300503    9716 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:44:29.303465    9716 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:44:29.306471    9716 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:44:29.309503    9716 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:44:29.311459    9716 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:44:29.311529    9716 config.go:182] Loaded profile config "stopped-upgrade-743000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0812 03:44:29.311583    9716 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:44:29.315466    9716 out.go:177] * Using the qemu2 driver based on user configuration
	I0812 03:44:29.322325    9716 start.go:297] selected driver: qemu2
	I0812 03:44:29.322333    9716 start.go:901] validating driver "qemu2" against <nil>
	I0812 03:44:29.322339    9716 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:44:29.324551    9716 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 03:44:29.327441    9716 out.go:177] * Automatically selected the socket_vmnet network
	I0812 03:44:29.330596    9716 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 03:44:29.330640    9716 cni.go:84] Creating CNI manager for "false"
	I0812 03:44:29.330679    9716 start.go:340] cluster config:
	{Name:false-487000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:44:29.334193    9716 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:44:29.341489    9716 out.go:177] * Starting "false-487000" primary control-plane node in "false-487000" cluster
	I0812 03:44:29.345536    9716 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0812 03:44:29.345548    9716 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0812 03:44:29.345553    9716 cache.go:56] Caching tarball of preloaded images
	I0812 03:44:29.345607    9716 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0812 03:44:29.345611    9716 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0812 03:44:29.345665    9716 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/false-487000/config.json ...
	I0812 03:44:29.345675    9716 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/false-487000/config.json: {Name:mkc890e9dfde86820b90698d0330e0cc8f6e75bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:44:29.346037    9716 start.go:360] acquireMachinesLock for false-487000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:44:29.346069    9716 start.go:364] duration metric: took 26.666µs to acquireMachinesLock for "false-487000"
	I0812 03:44:29.346081    9716 start.go:93] Provisioning new machine with config: &{Name:false-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:44:29.346105    9716 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:44:29.350502    9716 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0812 03:44:29.367075    9716 start.go:159] libmachine.API.Create for "false-487000" (driver="qemu2")
	I0812 03:44:29.367108    9716 client.go:168] LocalClient.Create starting
	I0812 03:44:29.367181    9716 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:44:29.367210    9716 main.go:141] libmachine: Decoding PEM data...
	I0812 03:44:29.367221    9716 main.go:141] libmachine: Parsing certificate...
	I0812 03:44:29.367258    9716 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:44:29.367284    9716 main.go:141] libmachine: Decoding PEM data...
	I0812 03:44:29.367297    9716 main.go:141] libmachine: Parsing certificate...
	I0812 03:44:29.367781    9716 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:44:29.518753    9716 main.go:141] libmachine: Creating SSH key...
	I0812 03:44:29.676974    9716 main.go:141] libmachine: Creating Disk image...
	I0812 03:44:29.676982    9716 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:44:29.677206    9716 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/false-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/false-487000/disk.qcow2
	I0812 03:44:29.686823    9716 main.go:141] libmachine: STDOUT: 
	I0812 03:44:29.686842    9716 main.go:141] libmachine: STDERR: 
	I0812 03:44:29.686902    9716 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/false-487000/disk.qcow2 +20000M
	I0812 03:44:29.695150    9716 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:44:29.695175    9716 main.go:141] libmachine: STDERR: 
	I0812 03:44:29.695196    9716 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/false-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/false-487000/disk.qcow2
	I0812 03:44:29.695201    9716 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:44:29.695212    9716 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:44:29.695241    9716 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/false-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/false-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/false-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:a9:d7:a3:0e:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/false-487000/disk.qcow2
	I0812 03:44:29.697011    9716 main.go:141] libmachine: STDOUT: 
	I0812 03:44:29.697025    9716 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:44:29.697041    9716 client.go:171] duration metric: took 329.932792ms to LocalClient.Create
	I0812 03:44:31.699144    9716 start.go:128] duration metric: took 2.3530505s to createHost
	I0812 03:44:31.699182    9716 start.go:83] releasing machines lock for "false-487000", held for 2.3531395s
	W0812 03:44:31.699233    9716 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:44:31.704281    9716 out.go:177] * Deleting "false-487000" in qemu2 ...
	W0812 03:44:31.727586    9716 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:44:31.727600    9716 start.go:729] Will try again in 5 seconds ...
	I0812 03:44:36.729698    9716 start.go:360] acquireMachinesLock for false-487000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:44:36.729948    9716 start.go:364] duration metric: took 195.083µs to acquireMachinesLock for "false-487000"
	I0812 03:44:36.730009    9716 start.go:93] Provisioning new machine with config: &{Name:false-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:44:36.730120    9716 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:44:36.738434    9716 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0812 03:44:36.758469    9716 start.go:159] libmachine.API.Create for "false-487000" (driver="qemu2")
	I0812 03:44:36.758501    9716 client.go:168] LocalClient.Create starting
	I0812 03:44:36.758583    9716 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:44:36.758622    9716 main.go:141] libmachine: Decoding PEM data...
	I0812 03:44:36.758631    9716 main.go:141] libmachine: Parsing certificate...
	I0812 03:44:36.758671    9716 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:44:36.758695    9716 main.go:141] libmachine: Decoding PEM data...
	I0812 03:44:36.758703    9716 main.go:141] libmachine: Parsing certificate...
	I0812 03:44:36.759101    9716 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:44:36.909696    9716 main.go:141] libmachine: Creating SSH key...
	I0812 03:44:37.018067    9716 main.go:141] libmachine: Creating Disk image...
	I0812 03:44:37.018074    9716 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:44:37.018297    9716 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/false-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/false-487000/disk.qcow2
	I0812 03:44:37.027797    9716 main.go:141] libmachine: STDOUT: 
	I0812 03:44:37.027816    9716 main.go:141] libmachine: STDERR: 
	I0812 03:44:37.027873    9716 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/false-487000/disk.qcow2 +20000M
	I0812 03:44:37.035813    9716 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:44:37.035829    9716 main.go:141] libmachine: STDERR: 
	I0812 03:44:37.035840    9716 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/false-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/false-487000/disk.qcow2
	I0812 03:44:37.035844    9716 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:44:37.035854    9716 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:44:37.035886    9716 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/false-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/false-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/false-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:df:ab:40:fd:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/false-487000/disk.qcow2
	I0812 03:44:37.037515    9716 main.go:141] libmachine: STDOUT: 
	I0812 03:44:37.037530    9716 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:44:37.037541    9716 client.go:171] duration metric: took 279.041083ms to LocalClient.Create
	I0812 03:44:39.039714    9716 start.go:128] duration metric: took 2.309597792s to createHost
	I0812 03:44:39.039794    9716 start.go:83] releasing machines lock for "false-487000", held for 2.309864667s
	W0812 03:44:39.040173    9716 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:44:39.056009    9716 out.go:177] 
	W0812 03:44:39.060020    9716 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:44:39.060046    9716 out.go:239] * 
	* 
	W0812 03:44:39.062676    9716 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:44:39.070932    9716 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.86s)
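For reference, every start attempt in this group fails at the same point, so the shape of the launch command is worth spelling out once. socket_vmnet_client connects to the unix socket and passes the connected descriptor to the child process (hence -netdev socket,id=net0,fd=3 in the QEMU arguments); when that initial connect is refused, qemu-system-aarch64 is never executed at all. A trimmed sketch of the invocation from the traces above (paths shortened and the generated MAC address elided; the options shown are as logged):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 \
	    -M virt,highmem=off -cpu host -accel hvf -m 3072 -smp 2 \
	    -boot d -cdrom boot2docker.iso \
	    -device virtio-net-pci,netdev=net0,mac=<generated> \
	    -netdev socket,id=net0,fd=3 \
	    -daemonize disk.qcow2

The recovery path is also visible in the trace: after the first failure the driver deletes the half-created machine, waits five seconds ("Will try again in 5 seconds ..."), retries once, and only then exits with GUEST_PROVISION (exit status 80).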

TestNetworkPlugins/group/enable-default-cni/Start (9.84s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.837881834s)

-- stdout --
	* [enable-default-cni-487000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-487000" primary control-plane node in "enable-default-cni-487000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-487000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0812 03:44:41.201820    9825 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:44:41.201971    9825 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:44:41.201975    9825 out.go:304] Setting ErrFile to fd 2...
	I0812 03:44:41.201982    9825 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:44:41.202105    9825 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:44:41.203303    9825 out.go:298] Setting JSON to false
	I0812 03:44:41.219495    9825 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6251,"bootTime":1723453230,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:44:41.219634    9825 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:44:41.223401    9825 out.go:177] * [enable-default-cni-487000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:44:41.230209    9825 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:44:41.230274    9825 notify.go:220] Checking for updates...
	I0812 03:44:41.237126    9825 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:44:41.240142    9825 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:44:41.243001    9825 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:44:41.246115    9825 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:44:41.249154    9825 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:44:41.250793    9825 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:44:41.250854    9825 config.go:182] Loaded profile config "stopped-upgrade-743000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0812 03:44:41.250922    9825 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:44:41.255123    9825 out.go:177] * Using the qemu2 driver based on user configuration
	I0812 03:44:41.261994    9825 start.go:297] selected driver: qemu2
	I0812 03:44:41.262000    9825 start.go:901] validating driver "qemu2" against <nil>
	I0812 03:44:41.262005    9825 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:44:41.264110    9825 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 03:44:41.267149    9825 out.go:177] * Automatically selected the socket_vmnet network
	E0812 03:44:41.270210    9825 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0812 03:44:41.270225    9825 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 03:44:41.270245    9825 cni.go:84] Creating CNI manager for "bridge"
	I0812 03:44:41.270254    9825 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0812 03:44:41.270301    9825 start.go:340] cluster config:
	{Name:enable-default-cni-487000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:44:41.273739    9825 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:44:41.281091    9825 out.go:177] * Starting "enable-default-cni-487000" primary control-plane node in "enable-default-cni-487000" cluster
	I0812 03:44:41.285153    9825 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0812 03:44:41.285169    9825 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0812 03:44:41.285176    9825 cache.go:56] Caching tarball of preloaded images
	I0812 03:44:41.285232    9825 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0812 03:44:41.285237    9825 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0812 03:44:41.285311    9825 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/enable-default-cni-487000/config.json ...
	I0812 03:44:41.285323    9825 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/enable-default-cni-487000/config.json: {Name:mkb460cb0413fd3f513215df2e91a54797ea34be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:44:41.285539    9825 start.go:360] acquireMachinesLock for enable-default-cni-487000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:44:41.285570    9825 start.go:364] duration metric: took 24.583µs to acquireMachinesLock for "enable-default-cni-487000"
	I0812 03:44:41.285583    9825 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:44:41.285613    9825 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:44:41.294162    9825 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0812 03:44:41.310072    9825 start.go:159] libmachine.API.Create for "enable-default-cni-487000" (driver="qemu2")
	I0812 03:44:41.310103    9825 client.go:168] LocalClient.Create starting
	I0812 03:44:41.310166    9825 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:44:41.310196    9825 main.go:141] libmachine: Decoding PEM data...
	I0812 03:44:41.310208    9825 main.go:141] libmachine: Parsing certificate...
	I0812 03:44:41.310244    9825 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:44:41.310266    9825 main.go:141] libmachine: Decoding PEM data...
	I0812 03:44:41.310272    9825 main.go:141] libmachine: Parsing certificate...
	I0812 03:44:41.310635    9825 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:44:41.461561    9825 main.go:141] libmachine: Creating SSH key...
	I0812 03:44:41.541030    9825 main.go:141] libmachine: Creating Disk image...
	I0812 03:44:41.541035    9825 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:44:41.541243    9825 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/enable-default-cni-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/enable-default-cni-487000/disk.qcow2
	I0812 03:44:41.550661    9825 main.go:141] libmachine: STDOUT: 
	I0812 03:44:41.550679    9825 main.go:141] libmachine: STDERR: 
	I0812 03:44:41.550740    9825 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/enable-default-cni-487000/disk.qcow2 +20000M
	I0812 03:44:41.558761    9825 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:44:41.558774    9825 main.go:141] libmachine: STDERR: 
	I0812 03:44:41.558785    9825 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/enable-default-cni-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/enable-default-cni-487000/disk.qcow2
	I0812 03:44:41.558790    9825 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:44:41.558812    9825 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:44:41.558841    9825 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/enable-default-cni-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/enable-default-cni-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/enable-default-cni-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:82:bd:02:a9:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/enable-default-cni-487000/disk.qcow2
	I0812 03:44:41.560443    9825 main.go:141] libmachine: STDOUT: 
	I0812 03:44:41.560458    9825 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:44:41.560475    9825 client.go:171] duration metric: took 250.3715ms to LocalClient.Create
	I0812 03:44:43.562656    9825 start.go:128] duration metric: took 2.277045084s to createHost
	I0812 03:44:43.562735    9825 start.go:83] releasing machines lock for "enable-default-cni-487000", held for 2.277186s
	W0812 03:44:43.562885    9825 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:44:43.574384    9825 out.go:177] * Deleting "enable-default-cni-487000" in qemu2 ...
	W0812 03:44:43.604162    9825 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:44:43.604194    9825 start.go:729] Will try again in 5 seconds ...
	I0812 03:44:48.606312    9825 start.go:360] acquireMachinesLock for enable-default-cni-487000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:44:48.606709    9825 start.go:364] duration metric: took 284.375µs to acquireMachinesLock for "enable-default-cni-487000"
	I0812 03:44:48.606758    9825 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:44:48.606936    9825 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:44:48.611524    9825 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0812 03:44:48.651702    9825 start.go:159] libmachine.API.Create for "enable-default-cni-487000" (driver="qemu2")
	I0812 03:44:48.651752    9825 client.go:168] LocalClient.Create starting
	I0812 03:44:48.651861    9825 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:44:48.651927    9825 main.go:141] libmachine: Decoding PEM data...
	I0812 03:44:48.651943    9825 main.go:141] libmachine: Parsing certificate...
	I0812 03:44:48.652008    9825 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:44:48.652047    9825 main.go:141] libmachine: Decoding PEM data...
	I0812 03:44:48.652057    9825 main.go:141] libmachine: Parsing certificate...
	I0812 03:44:48.652747    9825 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:44:48.809325    9825 main.go:141] libmachine: Creating SSH key...
	I0812 03:44:48.953750    9825 main.go:141] libmachine: Creating Disk image...
	I0812 03:44:48.953759    9825 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:44:48.953974    9825 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/enable-default-cni-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/enable-default-cni-487000/disk.qcow2
	I0812 03:44:48.963527    9825 main.go:141] libmachine: STDOUT: 
	I0812 03:44:48.963545    9825 main.go:141] libmachine: STDERR: 
	I0812 03:44:48.963599    9825 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/enable-default-cni-487000/disk.qcow2 +20000M
	I0812 03:44:48.971669    9825 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:44:48.971683    9825 main.go:141] libmachine: STDERR: 
	I0812 03:44:48.971695    9825 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/enable-default-cni-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/enable-default-cni-487000/disk.qcow2
	I0812 03:44:48.971700    9825 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:44:48.971708    9825 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:44:48.971738    9825 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/enable-default-cni-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/enable-default-cni-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/enable-default-cni-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:43:66:f9:78:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/enable-default-cni-487000/disk.qcow2
	I0812 03:44:48.973395    9825 main.go:141] libmachine: STDOUT: 
	I0812 03:44:48.973416    9825 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:44:48.973427    9825 client.go:171] duration metric: took 321.674625ms to LocalClient.Create
	I0812 03:44:50.975534    9825 start.go:128] duration metric: took 2.368614958s to createHost
	I0812 03:44:50.975565    9825 start.go:83] releasing machines lock for "enable-default-cni-487000", held for 2.368873541s
	W0812 03:44:50.975689    9825 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:44:50.988900    9825 out.go:177] 
	W0812 03:44:50.992883    9825 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:44:50.992889    9825 out.go:239] * 
	* 
	W0812 03:44:50.993375    9825 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:44:51.002817    9825 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.84s)
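
The failure mode is identical across this whole group: socket_vmnet_client cannot reach the Unix socket at /var/run/socket_vmnet ("Connection refused"), meaning the socket_vmnet daemon was not running on the build host when QEMU was launched. A minimal triage sketch, using the paths from the failing command line above (the daemon launch flags and gateway address below follow the socket_vmnet README and are assumptions about this agent's setup):

	# Confirm the socket exists and a daemon is attached to it.
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# Reproduce the failure without QEMU: socket_vmnet_client connects to
	# the socket and execs the given command with the connection on fd 3
	# (hence "-netdev socket,id=net0,fd=3" above), so this prints the same
	# "Connection refused" whenever the daemon is down.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

	# Restart the daemon; the gateway address is a guess for this host.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet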

TestNetworkPlugins/group/flannel/Start (9.83s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.828280292s)

-- stdout --
	* [flannel-487000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-487000" primary control-plane node in "flannel-487000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-487000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0812 03:44:53.093719    9934 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:44:53.093843    9934 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:44:53.093846    9934 out.go:304] Setting ErrFile to fd 2...
	I0812 03:44:53.093849    9934 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:44:53.093980    9934 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:44:53.095116    9934 out.go:298] Setting JSON to false
	I0812 03:44:53.111712    9934 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6263,"bootTime":1723453230,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:44:53.111785    9934 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:44:53.118242    9934 out.go:177] * [flannel-487000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:44:53.126160    9934 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:44:53.126239    9934 notify.go:220] Checking for updates...
	I0812 03:44:53.132151    9934 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:44:53.135146    9934 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:44:53.136649    9934 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:44:53.139175    9934 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:44:53.142151    9934 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:44:53.145570    9934 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:44:53.145631    9934 config.go:182] Loaded profile config "stopped-upgrade-743000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0812 03:44:53.145670    9934 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:44:53.150121    9934 out.go:177] * Using the qemu2 driver based on user configuration
	I0812 03:44:53.157187    9934 start.go:297] selected driver: qemu2
	I0812 03:44:53.157194    9934 start.go:901] validating driver "qemu2" against <nil>
	I0812 03:44:53.157206    9934 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:44:53.159474    9934 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 03:44:53.162194    9934 out.go:177] * Automatically selected the socket_vmnet network
	I0812 03:44:53.165174    9934 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 03:44:53.165207    9934 cni.go:84] Creating CNI manager for "flannel"
	I0812 03:44:53.165211    9934 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0812 03:44:53.165245    9934 start.go:340] cluster config:
	{Name:flannel-487000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:44:53.169028    9934 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:44:53.176202    9934 out.go:177] * Starting "flannel-487000" primary control-plane node in "flannel-487000" cluster
	I0812 03:44:53.180213    9934 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0812 03:44:53.180230    9934 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0812 03:44:53.180240    9934 cache.go:56] Caching tarball of preloaded images
	I0812 03:44:53.180297    9934 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0812 03:44:53.180303    9934 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0812 03:44:53.180368    9934 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/flannel-487000/config.json ...
	I0812 03:44:53.180387    9934 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/flannel-487000/config.json: {Name:mk3bef8b9c9de22e59f14e6007f3f84d679898b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:44:53.180770    9934 start.go:360] acquireMachinesLock for flannel-487000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:44:53.180804    9934 start.go:364] duration metric: took 26.042µs to acquireMachinesLock for "flannel-487000"
	I0812 03:44:53.180816    9934 start.go:93] Provisioning new machine with config: &{Name:flannel-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:flannel-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:44:53.180844    9934 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:44:53.189198    9934 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0812 03:44:53.204073    9934 start.go:159] libmachine.API.Create for "flannel-487000" (driver="qemu2")
	I0812 03:44:53.204100    9934 client.go:168] LocalClient.Create starting
	I0812 03:44:53.204169    9934 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:44:53.204198    9934 main.go:141] libmachine: Decoding PEM data...
	I0812 03:44:53.204208    9934 main.go:141] libmachine: Parsing certificate...
	I0812 03:44:53.204250    9934 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:44:53.204272    9934 main.go:141] libmachine: Decoding PEM data...
	I0812 03:44:53.204279    9934 main.go:141] libmachine: Parsing certificate...
	I0812 03:44:53.204774    9934 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:44:53.355767    9934 main.go:141] libmachine: Creating SSH key...
	I0812 03:44:53.448613    9934 main.go:141] libmachine: Creating Disk image...
	I0812 03:44:53.448620    9934 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:44:53.448848    9934 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/flannel-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/flannel-487000/disk.qcow2
	I0812 03:44:53.458342    9934 main.go:141] libmachine: STDOUT: 
	I0812 03:44:53.458365    9934 main.go:141] libmachine: STDERR: 
	I0812 03:44:53.458430    9934 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/flannel-487000/disk.qcow2 +20000M
	I0812 03:44:53.466412    9934 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:44:53.466427    9934 main.go:141] libmachine: STDERR: 
	I0812 03:44:53.466452    9934 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/flannel-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/flannel-487000/disk.qcow2
	I0812 03:44:53.466461    9934 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:44:53.466472    9934 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:44:53.466497    9934 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/flannel-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/flannel-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/flannel-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:2f:45:15:99:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/flannel-487000/disk.qcow2
	I0812 03:44:53.468139    9934 main.go:141] libmachine: STDOUT: 
	I0812 03:44:53.468158    9934 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:44:53.468180    9934 client.go:171] duration metric: took 264.080083ms to LocalClient.Create
	I0812 03:44:55.470368    9934 start.go:128] duration metric: took 2.289523125s to createHost
	I0812 03:44:55.470454    9934 start.go:83] releasing machines lock for "flannel-487000", held for 2.289671875s
	W0812 03:44:55.470555    9934 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:44:55.486818    9934 out.go:177] * Deleting "flannel-487000" in qemu2 ...
	W0812 03:44:55.514415    9934 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:44:55.514447    9934 start.go:729] Will try again in 5 seconds ...
	I0812 03:45:00.516571    9934 start.go:360] acquireMachinesLock for flannel-487000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:45:00.517290    9934 start.go:364] duration metric: took 589.5µs to acquireMachinesLock for "flannel-487000"
	I0812 03:45:00.517490    9934 start.go:93] Provisioning new machine with config: &{Name:flannel-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:flannel-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:45:00.517835    9934 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:45:00.527424    9934 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0812 03:45:00.579792    9934 start.go:159] libmachine.API.Create for "flannel-487000" (driver="qemu2")
	I0812 03:45:00.579848    9934 client.go:168] LocalClient.Create starting
	I0812 03:45:00.579976    9934 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:45:00.580051    9934 main.go:141] libmachine: Decoding PEM data...
	I0812 03:45:00.580065    9934 main.go:141] libmachine: Parsing certificate...
	I0812 03:45:00.580130    9934 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:45:00.580175    9934 main.go:141] libmachine: Decoding PEM data...
	I0812 03:45:00.580185    9934 main.go:141] libmachine: Parsing certificate...
	I0812 03:45:00.580758    9934 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:45:00.744709    9934 main.go:141] libmachine: Creating SSH key...
	I0812 03:45:00.831643    9934 main.go:141] libmachine: Creating Disk image...
	I0812 03:45:00.831650    9934 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:45:00.831865    9934 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/flannel-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/flannel-487000/disk.qcow2
	I0812 03:45:00.841049    9934 main.go:141] libmachine: STDOUT: 
	I0812 03:45:00.841071    9934 main.go:141] libmachine: STDERR: 
	I0812 03:45:00.841118    9934 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/flannel-487000/disk.qcow2 +20000M
	I0812 03:45:00.849000    9934 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:45:00.849018    9934 main.go:141] libmachine: STDERR: 
	I0812 03:45:00.849029    9934 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/flannel-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/flannel-487000/disk.qcow2
	I0812 03:45:00.849034    9934 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:45:00.849038    9934 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:45:00.849083    9934 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/flannel-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/flannel-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/flannel-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:e8:6c:5f:94:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/flannel-487000/disk.qcow2
	I0812 03:45:00.850759    9934 main.go:141] libmachine: STDOUT: 
	I0812 03:45:00.850773    9934 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:45:00.850794    9934 client.go:171] duration metric: took 270.9445ms to LocalClient.Create
	I0812 03:45:02.852967    9934 start.go:128] duration metric: took 2.335129166s to createHost
	I0812 03:45:02.853046    9934 start.go:83] releasing machines lock for "flannel-487000", held for 2.335714667s
	W0812 03:45:02.853389    9934 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:45:02.864728    9934 out.go:177] 
	W0812 03:45:02.867704    9934 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:45:02.867735    9934 out.go:239] * 
	* 
	W0812 03:45:02.870433    9934 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:45:02.878623    9934 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.83s)
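
Note the retry shape visible in the stderr above: the first createHost attempt fails after ~2.3s, minikube deletes the half-created profile ("Deleting \"flannel-487000\" in qemu2 ..."), waits 5 seconds, and retries once, which is why every test in this group fails in roughly 10 seconds. To take socket_vmnet out of the equation entirely, the qemu2 driver can be pointed at user-mode networking instead (a sketch; assumes this minikube build accepts --network=user, which needs no host daemon):

	# If this start succeeds, the fault is isolated to the socket_vmnet
	# daemon rather than QEMU, HVF, or the qemu2 driver itself.
	out/minikube-darwin-arm64 start -p flannel-487000 --memory=3072 \
		--cni=flannel --driver=qemu2 --network=user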

TestNetworkPlugins/group/bridge/Start (9.81s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.809845875s)

-- stdout --
	* [bridge-487000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-487000" primary control-plane node in "bridge-487000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-487000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0812 03:45:05.215625   10053 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:45:05.215767   10053 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:45:05.215770   10053 out.go:304] Setting ErrFile to fd 2...
	I0812 03:45:05.215773   10053 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:45:05.215905   10053 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:45:05.216992   10053 out.go:298] Setting JSON to false
	I0812 03:45:05.233485   10053 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6275,"bootTime":1723453230,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:45:05.233561   10053 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:45:05.238903   10053 out.go:177] * [bridge-487000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:45:05.246986   10053 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:45:05.247025   10053 notify.go:220] Checking for updates...
	I0812 03:45:05.253950   10053 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:45:05.256977   10053 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:45:05.259976   10053 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:45:05.262907   10053 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:45:05.265973   10053 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:45:05.269143   10053 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:45:05.269205   10053 config.go:182] Loaded profile config "stopped-upgrade-743000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0812 03:45:05.269253   10053 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:45:05.273976   10053 out.go:177] * Using the qemu2 driver based on user configuration
	I0812 03:45:05.280918   10053 start.go:297] selected driver: qemu2
	I0812 03:45:05.280929   10053 start.go:901] validating driver "qemu2" against <nil>
	I0812 03:45:05.280936   10053 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:45:05.283169   10053 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 03:45:05.285876   10053 out.go:177] * Automatically selected the socket_vmnet network
	I0812 03:45:05.289040   10053 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 03:45:05.289060   10053 cni.go:84] Creating CNI manager for "bridge"
	I0812 03:45:05.289065   10053 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0812 03:45:05.289107   10053 start.go:340] cluster config:
	{Name:bridge-487000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:45:05.292678   10053 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:45:05.299925   10053 out.go:177] * Starting "bridge-487000" primary control-plane node in "bridge-487000" cluster
	I0812 03:45:05.303812   10053 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0812 03:45:05.303833   10053 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0812 03:45:05.303845   10053 cache.go:56] Caching tarball of preloaded images
	I0812 03:45:05.303918   10053 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0812 03:45:05.303923   10053 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0812 03:45:05.303993   10053 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/bridge-487000/config.json ...
	I0812 03:45:05.304010   10053 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/bridge-487000/config.json: {Name:mk9ad7578c5ceccacfc5ca594e6562f5a35942e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:45:05.304400   10053 start.go:360] acquireMachinesLock for bridge-487000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:45:05.304433   10053 start.go:364] duration metric: took 27.083µs to acquireMachinesLock for "bridge-487000"
	I0812 03:45:05.304445   10053 start.go:93] Provisioning new machine with config: &{Name:bridge-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:bridge-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:45:05.304492   10053 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:45:05.312956   10053 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0812 03:45:05.330585   10053 start.go:159] libmachine.API.Create for "bridge-487000" (driver="qemu2")
	I0812 03:45:05.330616   10053 client.go:168] LocalClient.Create starting
	I0812 03:45:05.330683   10053 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:45:05.330714   10053 main.go:141] libmachine: Decoding PEM data...
	I0812 03:45:05.330725   10053 main.go:141] libmachine: Parsing certificate...
	I0812 03:45:05.330764   10053 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:45:05.330787   10053 main.go:141] libmachine: Decoding PEM data...
	I0812 03:45:05.330796   10053 main.go:141] libmachine: Parsing certificate...
	I0812 03:45:05.331234   10053 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:45:05.482221   10053 main.go:141] libmachine: Creating SSH key...
	I0812 03:45:05.591780   10053 main.go:141] libmachine: Creating Disk image...
	I0812 03:45:05.591786   10053 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:45:05.591995   10053 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/bridge-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/bridge-487000/disk.qcow2
	I0812 03:45:05.601786   10053 main.go:141] libmachine: STDOUT: 
	I0812 03:45:05.601809   10053 main.go:141] libmachine: STDERR: 
	I0812 03:45:05.601865   10053 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/bridge-487000/disk.qcow2 +20000M
	I0812 03:45:05.609791   10053 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:45:05.609807   10053 main.go:141] libmachine: STDERR: 
	I0812 03:45:05.609828   10053 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/bridge-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/bridge-487000/disk.qcow2
	I0812 03:45:05.609834   10053 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:45:05.609842   10053 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:45:05.609867   10053 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/bridge-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/bridge-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/bridge-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:b5:58:26:4b:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/bridge-487000/disk.qcow2
	I0812 03:45:05.611495   10053 main.go:141] libmachine: STDOUT: 
	I0812 03:45:05.611507   10053 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:45:05.611527   10053 client.go:171] duration metric: took 280.910084ms to LocalClient.Create
	I0812 03:45:07.613700   10053 start.go:128] duration metric: took 2.30921575s to createHost
	I0812 03:45:07.613768   10053 start.go:83] releasing machines lock for "bridge-487000", held for 2.309359291s
	W0812 03:45:07.613884   10053 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:45:07.625766   10053 out.go:177] * Deleting "bridge-487000" in qemu2 ...
	W0812 03:45:07.650064   10053 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:45:07.650089   10053 start.go:729] Will try again in 5 seconds ...
	I0812 03:45:12.652372   10053 start.go:360] acquireMachinesLock for bridge-487000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:45:12.652976   10053 start.go:364] duration metric: took 483.167µs to acquireMachinesLock for "bridge-487000"
	I0812 03:45:12.653140   10053 start.go:93] Provisioning new machine with config: &{Name:bridge-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:bridge-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:45:12.653357   10053 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:45:12.662934   10053 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0812 03:45:12.712753   10053 start.go:159] libmachine.API.Create for "bridge-487000" (driver="qemu2")
	I0812 03:45:12.712809   10053 client.go:168] LocalClient.Create starting
	I0812 03:45:12.712922   10053 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:45:12.712983   10053 main.go:141] libmachine: Decoding PEM data...
	I0812 03:45:12.713005   10053 main.go:141] libmachine: Parsing certificate...
	I0812 03:45:12.713062   10053 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:45:12.713115   10053 main.go:141] libmachine: Decoding PEM data...
	I0812 03:45:12.713132   10053 main.go:141] libmachine: Parsing certificate...
	I0812 03:45:12.713934   10053 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:45:12.870885   10053 main.go:141] libmachine: Creating SSH key...
	I0812 03:45:12.940715   10053 main.go:141] libmachine: Creating Disk image...
	I0812 03:45:12.940720   10053 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:45:12.940922   10053 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/bridge-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/bridge-487000/disk.qcow2
	I0812 03:45:12.950584   10053 main.go:141] libmachine: STDOUT: 
	I0812 03:45:12.950603   10053 main.go:141] libmachine: STDERR: 
	I0812 03:45:12.950644   10053 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/bridge-487000/disk.qcow2 +20000M
	I0812 03:45:12.958562   10053 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:45:12.958580   10053 main.go:141] libmachine: STDERR: 
	I0812 03:45:12.958594   10053 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/bridge-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/bridge-487000/disk.qcow2
	I0812 03:45:12.958598   10053 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:45:12.958607   10053 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:45:12.958639   10053 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/bridge-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/bridge-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/bridge-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:86:43:a8:8e:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/bridge-487000/disk.qcow2
	I0812 03:45:12.960278   10053 main.go:141] libmachine: STDOUT: 
	I0812 03:45:12.960296   10053 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:45:12.960309   10053 client.go:171] duration metric: took 247.498917ms to LocalClient.Create
	I0812 03:45:14.962461   10053 start.go:128] duration metric: took 2.309091167s to createHost
	I0812 03:45:14.962529   10053 start.go:83] releasing machines lock for "bridge-487000", held for 2.309561333s
	W0812 03:45:14.962953   10053 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:45:14.973569   10053 out.go:177] 
	W0812 03:45:14.977526   10053 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:45:14.977541   10053 out.go:239] * 
	* 
	W0812 03:45:14.978926   10053 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:45:14.987426   10053 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.81s)
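
net_test.go:114 only asserts that the start command exited non-zero; the specific status 80 is minikube's reserved exit code for guest errors (GUEST_PROVISION falls in that class), so every failing start in this group reports the same code. A quick post-failure check, reusing the log-collection command the advice box above already suggests:

	# Capture the exit class and the full machine logs for the issue report.
	out/minikube-darwin-arm64 start -p bridge-487000 --driver=qemu2 || echo "exit: $?"
	out/minikube-darwin-arm64 logs -p bridge-487000 --file=logs.txt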

TestNetworkPlugins/group/kubenet/Start (9.74s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-487000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.73568525s)

-- stdout --
	* [kubenet-487000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-487000" primary control-plane node in "kubenet-487000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-487000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0812 03:45:17.148009   10164 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:45:17.148146   10164 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:45:17.148157   10164 out.go:304] Setting ErrFile to fd 2...
	I0812 03:45:17.148160   10164 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:45:17.148290   10164 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:45:17.149351   10164 out.go:298] Setting JSON to false
	I0812 03:45:17.165505   10164 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6287,"bootTime":1723453230,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:45:17.165576   10164 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:45:17.171514   10164 out.go:177] * [kubenet-487000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:45:17.177564   10164 notify.go:220] Checking for updates...
	I0812 03:45:17.182537   10164 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:45:17.185568   10164 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:45:17.189402   10164 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:45:17.192544   10164 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:45:17.195566   10164 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:45:17.198498   10164 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:45:17.201861   10164 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:45:17.201937   10164 config.go:182] Loaded profile config "stopped-upgrade-743000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0812 03:45:17.201985   10164 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:45:17.206487   10164 out.go:177] * Using the qemu2 driver based on user configuration
	I0812 03:45:17.213562   10164 start.go:297] selected driver: qemu2
	I0812 03:45:17.213566   10164 start.go:901] validating driver "qemu2" against <nil>
	I0812 03:45:17.213573   10164 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:45:17.215833   10164 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 03:45:17.218553   10164 out.go:177] * Automatically selected the socket_vmnet network
	I0812 03:45:17.220084   10164 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 03:45:17.220103   10164 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0812 03:45:17.220140   10164 start.go:340] cluster config:
	{Name:kubenet-487000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:45:17.223694   10164 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:45:17.231542   10164 out.go:177] * Starting "kubenet-487000" primary control-plane node in "kubenet-487000" cluster
	I0812 03:45:17.235538   10164 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0812 03:45:17.235557   10164 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0812 03:45:17.235578   10164 cache.go:56] Caching tarball of preloaded images
	I0812 03:45:17.235643   10164 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0812 03:45:17.235648   10164 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0812 03:45:17.235738   10164 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/kubenet-487000/config.json ...
	I0812 03:45:17.235749   10164 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/kubenet-487000/config.json: {Name:mk577b95c5b2fe524ce6cc16595f6bc61792447f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:45:17.236155   10164 start.go:360] acquireMachinesLock for kubenet-487000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:45:17.236189   10164 start.go:364] duration metric: took 27.625µs to acquireMachinesLock for "kubenet-487000"
	I0812 03:45:17.236201   10164 start.go:93] Provisioning new machine with config: &{Name:kubenet-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:45:17.236232   10164 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:45:17.244471   10164 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0812 03:45:17.260994   10164 start.go:159] libmachine.API.Create for "kubenet-487000" (driver="qemu2")
	I0812 03:45:17.261033   10164 client.go:168] LocalClient.Create starting
	I0812 03:45:17.261109   10164 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:45:17.261141   10164 main.go:141] libmachine: Decoding PEM data...
	I0812 03:45:17.261153   10164 main.go:141] libmachine: Parsing certificate...
	I0812 03:45:17.261195   10164 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:45:17.261223   10164 main.go:141] libmachine: Decoding PEM data...
	I0812 03:45:17.261231   10164 main.go:141] libmachine: Parsing certificate...
	I0812 03:45:17.261579   10164 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:45:17.410769   10164 main.go:141] libmachine: Creating SSH key...
	I0812 03:45:17.480777   10164 main.go:141] libmachine: Creating Disk image...
	I0812 03:45:17.480785   10164 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:45:17.480995   10164 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubenet-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubenet-487000/disk.qcow2
	I0812 03:45:17.490434   10164 main.go:141] libmachine: STDOUT: 
	I0812 03:45:17.490452   10164 main.go:141] libmachine: STDERR: 
	I0812 03:45:17.490502   10164 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubenet-487000/disk.qcow2 +20000M
	I0812 03:45:17.498599   10164 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:45:17.498613   10164 main.go:141] libmachine: STDERR: 
	I0812 03:45:17.498628   10164 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubenet-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubenet-487000/disk.qcow2
	I0812 03:45:17.498631   10164 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:45:17.498654   10164 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:45:17.498687   10164 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubenet-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubenet-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubenet-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:0f:fb:a3:5a:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubenet-487000/disk.qcow2
	I0812 03:45:17.500354   10164 main.go:141] libmachine: STDOUT: 
	I0812 03:45:17.500377   10164 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:45:17.500396   10164 client.go:171] duration metric: took 239.360084ms to LocalClient.Create
	I0812 03:45:19.502593   10164 start.go:128] duration metric: took 2.266361542s to createHost
	I0812 03:45:19.502681   10164 start.go:83] releasing machines lock for "kubenet-487000", held for 2.26651425s
	W0812 03:45:19.502829   10164 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:45:19.519068   10164 out.go:177] * Deleting "kubenet-487000" in qemu2 ...
	W0812 03:45:19.547604   10164 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:45:19.547644   10164 start.go:729] Will try again in 5 seconds ...
	I0812 03:45:24.549840   10164 start.go:360] acquireMachinesLock for kubenet-487000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:45:24.550455   10164 start.go:364] duration metric: took 496.375µs to acquireMachinesLock for "kubenet-487000"
	I0812 03:45:24.550534   10164 start.go:93] Provisioning new machine with config: &{Name:kubenet-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:kubenet-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:45:24.550919   10164 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:45:24.555698   10164 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0812 03:45:24.604451   10164 start.go:159] libmachine.API.Create for "kubenet-487000" (driver="qemu2")
	I0812 03:45:24.604506   10164 client.go:168] LocalClient.Create starting
	I0812 03:45:24.604659   10164 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:45:24.604730   10164 main.go:141] libmachine: Decoding PEM data...
	I0812 03:45:24.604748   10164 main.go:141] libmachine: Parsing certificate...
	I0812 03:45:24.604808   10164 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:45:24.604853   10164 main.go:141] libmachine: Decoding PEM data...
	I0812 03:45:24.604865   10164 main.go:141] libmachine: Parsing certificate...
	I0812 03:45:24.605496   10164 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:45:24.766291   10164 main.go:141] libmachine: Creating SSH key...
	I0812 03:45:24.797871   10164 main.go:141] libmachine: Creating Disk image...
	I0812 03:45:24.797876   10164 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:45:24.798094   10164 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubenet-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubenet-487000/disk.qcow2
	I0812 03:45:24.807693   10164 main.go:141] libmachine: STDOUT: 
	I0812 03:45:24.807711   10164 main.go:141] libmachine: STDERR: 
	I0812 03:45:24.807770   10164 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubenet-487000/disk.qcow2 +20000M
	I0812 03:45:24.815681   10164 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:45:24.815696   10164 main.go:141] libmachine: STDERR: 
	I0812 03:45:24.815709   10164 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubenet-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubenet-487000/disk.qcow2
	I0812 03:45:24.815714   10164 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:45:24.815724   10164 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:45:24.815760   10164 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubenet-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubenet-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubenet-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:7d:97:1f:f1:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/kubenet-487000/disk.qcow2
	I0812 03:45:24.817533   10164 main.go:141] libmachine: STDOUT: 
	I0812 03:45:24.817547   10164 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:45:24.817558   10164 client.go:171] duration metric: took 213.048542ms to LocalClient.Create
	I0812 03:45:26.819644   10164 start.go:128] duration metric: took 2.268721s to createHost
	I0812 03:45:26.819684   10164 start.go:83] releasing machines lock for "kubenet-487000", held for 2.269238625s
	W0812 03:45:26.819837   10164 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:45:26.830237   10164 out.go:177] 
	W0812 03:45:26.834281   10164 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:45:26.834296   10164 out.go:239] * 
	W0812 03:45:26.835606   10164 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:45:26.847277   10164 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.74s)
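
Every qemu2 start failure in this section reduces to the root cause visible above: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the socket_vmnet daemon, so qemu-system-aarch64 is never launched and minikube exits with GUEST_PROVISION (exit status 80). The following is a minimal diagnostic sketch in Go, assuming only the socket path taken from the log; it is a hypothetical probe, not part of minikube:

    // probe_socket_vmnet.go - hypothetical diagnostic sketch, not minikube code.
    // Dials the unix socket that the qemu2 driver hands to socket_vmnet_client;
    // on this agent the dial fails with "connection refused", matching the log.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // path from the failing log lines
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If the probe fails as it does here, the fix is on the build agent (a socket_vmnet service that is not running or not listening on that path), not in the cluster configuration.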

TestStartStop/group/old-k8s-version/serial/FirstStart (9.76s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-061000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-061000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.732790875s)

-- stdout --
	* [old-k8s-version-061000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-061000" primary control-plane node in "old-k8s-version-061000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-061000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0812 03:45:29.015968   10273 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:45:29.016099   10273 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:45:29.016102   10273 out.go:304] Setting ErrFile to fd 2...
	I0812 03:45:29.016104   10273 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:45:29.016259   10273 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:45:29.017349   10273 out.go:298] Setting JSON to false
	I0812 03:45:29.033899   10273 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6299,"bootTime":1723453230,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:45:29.033985   10273 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:45:29.038373   10273 out.go:177] * [old-k8s-version-061000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:45:29.047006   10273 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:45:29.047059   10273 notify.go:220] Checking for updates...
	I0812 03:45:29.054894   10273 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:45:29.057976   10273 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:45:29.060923   10273 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:45:29.063962   10273 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:45:29.066952   10273 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:45:29.070222   10273 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:45:29.070290   10273 config.go:182] Loaded profile config "stopped-upgrade-743000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0812 03:45:29.070333   10273 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:45:29.074928   10273 out.go:177] * Using the qemu2 driver based on user configuration
	I0812 03:45:29.080934   10273 start.go:297] selected driver: qemu2
	I0812 03:45:29.080940   10273 start.go:901] validating driver "qemu2" against <nil>
	I0812 03:45:29.080945   10273 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:45:29.083219   10273 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 03:45:29.085888   10273 out.go:177] * Automatically selected the socket_vmnet network
	I0812 03:45:29.088963   10273 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 03:45:29.088998   10273 cni.go:84] Creating CNI manager for ""
	I0812 03:45:29.089005   10273 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0812 03:45:29.089038   10273 start.go:340] cluster config:
	{Name:old-k8s-version-061000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-061000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:45:29.092631   10273 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:45:29.099882   10273 out.go:177] * Starting "old-k8s-version-061000" primary control-plane node in "old-k8s-version-061000" cluster
	I0812 03:45:29.103937   10273 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0812 03:45:29.103952   10273 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0812 03:45:29.103959   10273 cache.go:56] Caching tarball of preloaded images
	I0812 03:45:29.104008   10273 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0812 03:45:29.104013   10273 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0812 03:45:29.104092   10273 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/old-k8s-version-061000/config.json ...
	I0812 03:45:29.104102   10273 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/old-k8s-version-061000/config.json: {Name:mk08d6d9197b77acd754091fb825f0bdd361a4b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:45:29.104479   10273 start.go:360] acquireMachinesLock for old-k8s-version-061000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:45:29.104508   10273 start.go:364] duration metric: took 22.042µs to acquireMachinesLock for "old-k8s-version-061000"
	I0812 03:45:29.104519   10273 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-061000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-061000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:45:29.104545   10273 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:45:29.113029   10273 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0812 03:45:29.127890   10273 start.go:159] libmachine.API.Create for "old-k8s-version-061000" (driver="qemu2")
	I0812 03:45:29.127921   10273 client.go:168] LocalClient.Create starting
	I0812 03:45:29.127977   10273 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:45:29.128007   10273 main.go:141] libmachine: Decoding PEM data...
	I0812 03:45:29.128015   10273 main.go:141] libmachine: Parsing certificate...
	I0812 03:45:29.128058   10273 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:45:29.128082   10273 main.go:141] libmachine: Decoding PEM data...
	I0812 03:45:29.128091   10273 main.go:141] libmachine: Parsing certificate...
	I0812 03:45:29.128421   10273 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:45:29.278353   10273 main.go:141] libmachine: Creating SSH key...
	I0812 03:45:29.336090   10273 main.go:141] libmachine: Creating Disk image...
	I0812 03:45:29.336096   10273 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:45:29.336328   10273 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/old-k8s-version-061000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/old-k8s-version-061000/disk.qcow2
	I0812 03:45:29.345637   10273 main.go:141] libmachine: STDOUT: 
	I0812 03:45:29.345663   10273 main.go:141] libmachine: STDERR: 
	I0812 03:45:29.345709   10273 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/old-k8s-version-061000/disk.qcow2 +20000M
	I0812 03:45:29.353685   10273 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:45:29.353703   10273 main.go:141] libmachine: STDERR: 
	I0812 03:45:29.353721   10273 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/old-k8s-version-061000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/old-k8s-version-061000/disk.qcow2
	I0812 03:45:29.353726   10273 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:45:29.353736   10273 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:45:29.353762   10273 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/old-k8s-version-061000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/old-k8s-version-061000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/old-k8s-version-061000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:64:51:cb:3f:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/old-k8s-version-061000/disk.qcow2
	I0812 03:45:29.355474   10273 main.go:141] libmachine: STDOUT: 
	I0812 03:45:29.355488   10273 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:45:29.355504   10273 client.go:171] duration metric: took 227.581542ms to LocalClient.Create
	I0812 03:45:31.357684   10273 start.go:128] duration metric: took 2.253142416s to createHost
	I0812 03:45:31.357814   10273 start.go:83] releasing machines lock for "old-k8s-version-061000", held for 2.253315416s
	W0812 03:45:31.357862   10273 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:45:31.371240   10273 out.go:177] * Deleting "old-k8s-version-061000" in qemu2 ...
	W0812 03:45:31.398174   10273 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:45:31.398210   10273 start.go:729] Will try again in 5 seconds ...
	I0812 03:45:36.400236   10273 start.go:360] acquireMachinesLock for old-k8s-version-061000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:45:36.400367   10273 start.go:364] duration metric: took 104.959µs to acquireMachinesLock for "old-k8s-version-061000"
	I0812 03:45:36.400400   10273 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-061000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-061000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:45:36.400472   10273 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:45:36.408185   10273 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0812 03:45:36.426192   10273 start.go:159] libmachine.API.Create for "old-k8s-version-061000" (driver="qemu2")
	I0812 03:45:36.426220   10273 client.go:168] LocalClient.Create starting
	I0812 03:45:36.426297   10273 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:45:36.426331   10273 main.go:141] libmachine: Decoding PEM data...
	I0812 03:45:36.426340   10273 main.go:141] libmachine: Parsing certificate...
	I0812 03:45:36.426373   10273 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:45:36.426397   10273 main.go:141] libmachine: Decoding PEM data...
	I0812 03:45:36.426402   10273 main.go:141] libmachine: Parsing certificate...
	I0812 03:45:36.426757   10273 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:45:36.579676   10273 main.go:141] libmachine: Creating SSH key...
	I0812 03:45:36.666149   10273 main.go:141] libmachine: Creating Disk image...
	I0812 03:45:36.666157   10273 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:45:36.666378   10273 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/old-k8s-version-061000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/old-k8s-version-061000/disk.qcow2
	I0812 03:45:36.676009   10273 main.go:141] libmachine: STDOUT: 
	I0812 03:45:36.676029   10273 main.go:141] libmachine: STDERR: 
	I0812 03:45:36.676080   10273 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/old-k8s-version-061000/disk.qcow2 +20000M
	I0812 03:45:36.684422   10273 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:45:36.684436   10273 main.go:141] libmachine: STDERR: 
	I0812 03:45:36.684449   10273 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/old-k8s-version-061000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/old-k8s-version-061000/disk.qcow2
	I0812 03:45:36.684454   10273 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:45:36.684466   10273 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:45:36.684505   10273 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/old-k8s-version-061000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/old-k8s-version-061000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/old-k8s-version-061000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:c3:19:41:14:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/old-k8s-version-061000/disk.qcow2
	I0812 03:45:36.686273   10273 main.go:141] libmachine: STDOUT: 
	I0812 03:45:36.686289   10273 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:45:36.686301   10273 client.go:171] duration metric: took 260.081042ms to LocalClient.Create
	I0812 03:45:38.688330   10273 start.go:128] duration metric: took 2.287881792s to createHost
	I0812 03:45:38.688351   10273 start.go:83] releasing machines lock for "old-k8s-version-061000", held for 2.288003334s
	W0812 03:45:38.688423   10273 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-061000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:45:38.697636   10273 out.go:177] 
	W0812 03:45:38.701638   10273 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:45:38.701644   10273 out.go:239] * 
	W0812 03:45:38.702128   10273 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:45:38.709640   10273 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-061000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-061000 -n old-k8s-version-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-061000 -n old-k8s-version-061000: exit status 7 (29.4355ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-061000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.76s)
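
The trace above also shows minikube's recovery behavior: after the first createHost attempt fails, it deletes the half-created machine, logs "StartHost failed, but will try again", sleeps a fixed 5 seconds, and makes exactly one more attempt before exiting. An illustrative sketch of that single-retry shape, under the assumption that both attempts fail identically (a simplified stand-in, not minikube's actual implementation):

    // start_retry.go - illustrative sketch of the one-retry flow seen in the
    // log above; not minikube's actual code.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // createHost stands in for the qemu2 host creation that fails in this run.
    func createHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func startWithRetry() error {
        if err := createHost(); err != nil {
            fmt.Printf("! StartHost failed, but will try again: %v\n", err)
            time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
            return createHost()
        }
        return nil
    }

    func main() {
        if err := startWithRetry(); err != nil {
            fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
        }
    }

A fixed single retry cannot help here: the daemon-side condition does not change within 5 seconds, so the second attempt fails the same way, exactly as the log shows.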

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-061000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-061000 create -f testdata/busybox.yaml: exit status 1 (27.277416ms)

** stderr ** 
	error: context "old-k8s-version-061000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-061000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-061000 -n old-k8s-version-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-061000 -n old-k8s-version-061000: exit status 7 (29.023083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-061000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-061000 -n old-k8s-version-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-061000 -n old-k8s-version-061000: exit status 7 (29.466292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-061000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
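
This failure is purely downstream of FirstStart: because the cluster was never provisioned, the kubeconfig has no "old-k8s-version-061000" context, and any kubectl call pinned to that context exits 1 before touching a cluster. A short sketch of how such a call behaves when driven from Go, assuming kubectl is on PATH (a hypothetical helper, not the test suite's code):

    // kubectl_context.go - hypothetical sketch, not from the minikube test suite.
    // Runs kubectl against a context that was never created; kubectl exits 1
    // with `error: context "old-k8s-version-061000" does not exist`, as above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("kubectl", "--context", "old-k8s-version-061000",
            "create", "-f", "testdata/busybox.yaml")
        out, err := cmd.CombinedOutput()
        if err != nil {
            fmt.Printf("kubectl failed: %v\n%s", err, out)
        }
    }

The same missing-context error accounts for every subsequent kubectl-based failure in this serial group.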

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-061000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-061000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-061000 describe deploy/metrics-server -n kube-system: exit status 1 (27.14725ms)

** stderr ** 
	error: context "old-k8s-version-061000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-061000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-061000 -n old-k8s-version-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-061000 -n old-k8s-version-061000: exit status 7 (29.006583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-061000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
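
The assertion that fails here combines the --registries and --images overrides into the image reference "fake.domain/registry.k8s.io/echoserver:1.4" and expects it to appear in the metrics-server deployment description; with no running cluster the description is empty, so the check trivially fails. A reduced sketch of that containment check (illustrative only, not the verbatim test code):

    // addon_image_check.go - illustrative reduction of the failing assertion;
    // not the verbatim test code.
    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Expected reference built from the --registries and --images flags in the log.
        want := "fake.domain/registry.k8s.io/echoserver:1.4"
        deployInfo := "" // `kubectl describe deploy/metrics-server` output; empty here
        if !strings.Contains(deployInfo, want) {
            fmt.Printf("addon did not load correct image. Expected to contain %q\n", want)
        }
    }

Because the expected string is checked against empty output, the test reports a wrong addon image even though the real defect is the earlier provisioning failure.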

TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-061000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-061000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.183792583s)

-- stdout --
	* [old-k8s-version-061000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-061000" primary control-plane node in "old-k8s-version-061000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-061000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-061000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0812 03:45:41.238436   10319 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:45:41.238569   10319 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:45:41.238573   10319 out.go:304] Setting ErrFile to fd 2...
	I0812 03:45:41.238575   10319 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:45:41.238695   10319 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:45:41.239586   10319 out.go:298] Setting JSON to false
	I0812 03:45:41.255921   10319 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6311,"bootTime":1723453230,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:45:41.255993   10319 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:45:41.259908   10319 out.go:177] * [old-k8s-version-061000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:45:41.266870   10319 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:45:41.266938   10319 notify.go:220] Checking for updates...
	I0812 03:45:41.274843   10319 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:45:41.277864   10319 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:45:41.280884   10319 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:45:41.283845   10319 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:45:41.286864   10319 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:45:41.290194   10319 config.go:182] Loaded profile config "old-k8s-version-061000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0812 03:45:41.293799   10319 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0812 03:45:41.296876   10319 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:45:41.299839   10319 out.go:177] * Using the qemu2 driver based on existing profile
	I0812 03:45:41.306886   10319 start.go:297] selected driver: qemu2
	I0812 03:45:41.306893   10319 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-061000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-061000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:45:41.306992   10319 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:45:41.309405   10319 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 03:45:41.309431   10319 cni.go:84] Creating CNI manager for ""
	I0812 03:45:41.309440   10319 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0812 03:45:41.309466   10319 start.go:340] cluster config:
	{Name:old-k8s-version-061000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-061000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:45:41.313148   10319 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:45:41.319869   10319 out.go:177] * Starting "old-k8s-version-061000" primary control-plane node in "old-k8s-version-061000" cluster
	I0812 03:45:41.323848   10319 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0812 03:45:41.323861   10319 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0812 03:45:41.323869   10319 cache.go:56] Caching tarball of preloaded images
	I0812 03:45:41.323930   10319 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0812 03:45:41.323935   10319 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0812 03:45:41.323983   10319 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/old-k8s-version-061000/config.json ...
	I0812 03:45:41.324446   10319 start.go:360] acquireMachinesLock for old-k8s-version-061000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:45:41.324473   10319 start.go:364] duration metric: took 21.5µs to acquireMachinesLock for "old-k8s-version-061000"
	I0812 03:45:41.324482   10319 start.go:96] Skipping create...Using existing machine configuration
	I0812 03:45:41.324490   10319 fix.go:54] fixHost starting: 
	I0812 03:45:41.324596   10319 fix.go:112] recreateIfNeeded on old-k8s-version-061000: state=Stopped err=<nil>
	W0812 03:45:41.324603   10319 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 03:45:41.326227   10319 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-061000" ...
	I0812 03:45:41.333848   10319 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:45:41.333889   10319 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/old-k8s-version-061000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/old-k8s-version-061000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/old-k8s-version-061000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:c3:19:41:14:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/old-k8s-version-061000/disk.qcow2
	I0812 03:45:41.335841   10319 main.go:141] libmachine: STDOUT: 
	I0812 03:45:41.335858   10319 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:45:41.335885   10319 fix.go:56] duration metric: took 11.397375ms for fixHost
	I0812 03:45:41.335890   10319 start.go:83] releasing machines lock for "old-k8s-version-061000", held for 11.413292ms
	W0812 03:45:41.335895   10319 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:45:41.335940   10319 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:45:41.335945   10319 start.go:729] Will try again in 5 seconds ...
	I0812 03:45:46.337715   10319 start.go:360] acquireMachinesLock for old-k8s-version-061000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:45:46.338248   10319 start.go:364] duration metric: took 414.125µs to acquireMachinesLock for "old-k8s-version-061000"
	I0812 03:45:46.338393   10319 start.go:96] Skipping create...Using existing machine configuration
	I0812 03:45:46.338411   10319 fix.go:54] fixHost starting: 
	I0812 03:45:46.338992   10319 fix.go:112] recreateIfNeeded on old-k8s-version-061000: state=Stopped err=<nil>
	W0812 03:45:46.339013   10319 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 03:45:46.346640   10319 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-061000" ...
	I0812 03:45:46.350711   10319 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:45:46.350873   10319 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/old-k8s-version-061000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/old-k8s-version-061000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/old-k8s-version-061000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:c3:19:41:14:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/old-k8s-version-061000/disk.qcow2
	I0812 03:45:46.359790   10319 main.go:141] libmachine: STDOUT: 
	I0812 03:45:46.359848   10319 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:45:46.359942   10319 fix.go:56] duration metric: took 21.531833ms for fixHost
	I0812 03:45:46.359960   10319 start.go:83] releasing machines lock for "old-k8s-version-061000", held for 21.694167ms
	W0812 03:45:46.360101   10319 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-061000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-061000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:45:46.368688   10319 out.go:177] 
	W0812 03:45:46.372750   10319 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:45:46.372793   10319 out.go:239] * 
	* 
	W0812 03:45:46.374308   10319 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:45:46.382765   10319 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-061000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-061000 -n old-k8s-version-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-061000 -n old-k8s-version-061000: exit status 7 (60.808458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-061000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)
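
Note: every failure in this test group traces back to the same root cause shown in the stderr above: the socket_vmnet daemon on the CI host is refusing connections at /var/run/socket_vmnet, so socket_vmnet_client exits before QEMU can bring the VM up. A minimal diagnostic sketch for a host set up like this one (the launchd lookup is an assumption about how the daemon is managed on this agent; the foreground invocation and example gateway address follow the lima-vm/socket_vmnet README):

	# Is the daemon running, and does its socket exist?
	ps aux | grep -v grep | grep socket_vmnet
	ls -l /var/run/socket_vmnet
	# If managed by launchd, look for its job (service label is an assumption)
	sudo launchctl list | grep -i socket_vmnet
	# Or run it in the foreground to rule out permission problems
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

Once the socket accepts connections again, the advice printed by minikube itself applies: "minikube delete -p old-k8s-version-061000" followed by a fresh start clears the stale profile.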

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-061000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-061000 -n old-k8s-version-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-061000 -n old-k8s-version-061000: exit status 7 (31.854333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-061000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-061000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-061000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-061000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.935292ms)

** stderr ** 
	error: context "old-k8s-version-061000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-061000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-061000 -n old-k8s-version-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-061000 -n old-k8s-version-061000: exit status 7 (29.708792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-061000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)
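
Note: the context "old-k8s-version-061000" does not exist errors in this test and the previous one are a downstream symptom, not an independent failure: because SecondStart never brought the VM up, minikube never rewrote the kubeconfig entry for the profile, so every kubectl call against that context fails immediately. A quick confirmation with stock kubectl (context name taken from the log):

	# The failed profile will be absent from the context list
	kubectl config get-contexts
	# Targeted check; exits non-zero when the context is missing
	kubectl config get-contexts old-k8s-version-061000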

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-061000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-061000 -n old-k8s-version-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-061000 -n old-k8s-version-061000: exit status 7 (28.86525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-061000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-061000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-061000 --alsologtostderr -v=1: exit status 83 (42.434167ms)

-- stdout --
	* The control-plane node old-k8s-version-061000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-061000"

-- /stdout --
** stderr ** 
	I0812 03:45:46.643918   10338 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:45:46.644919   10338 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:45:46.644923   10338 out.go:304] Setting ErrFile to fd 2...
	I0812 03:45:46.644925   10338 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:45:46.645081   10338 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:45:46.645296   10338 out.go:298] Setting JSON to false
	I0812 03:45:46.645303   10338 mustload.go:65] Loading cluster: old-k8s-version-061000
	I0812 03:45:46.645488   10338 config.go:182] Loaded profile config "old-k8s-version-061000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0812 03:45:46.650277   10338 out.go:177] * The control-plane node old-k8s-version-061000 host is not running: state=Stopped
	I0812 03:45:46.654190   10338 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-061000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-061000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-061000 -n old-k8s-version-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-061000 -n old-k8s-version-061000: exit status 7 (29.076708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-061000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-061000 -n old-k8s-version-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-061000 -n old-k8s-version-061000: exit status 7 (28.4405ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-061000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

TestStartStop/group/no-preload/serial/FirstStart (9.99s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-120000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-120000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (9.921733083s)

-- stdout --
	* [no-preload-120000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-120000" primary control-plane node in "no-preload-120000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-120000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0812 03:45:46.958841   10355 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:45:46.958977   10355 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:45:46.958981   10355 out.go:304] Setting ErrFile to fd 2...
	I0812 03:45:46.958984   10355 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:45:46.959129   10355 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:45:46.960260   10355 out.go:298] Setting JSON to false
	I0812 03:45:46.977087   10355 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6316,"bootTime":1723453230,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:45:46.977176   10355 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:45:46.981139   10355 out.go:177] * [no-preload-120000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:45:46.988166   10355 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:45:46.988216   10355 notify.go:220] Checking for updates...
	I0812 03:45:46.995167   10355 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:45:46.998164   10355 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:45:47.001149   10355 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:45:47.004130   10355 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:45:47.007031   10355 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:45:47.010400   10355 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:45:47.010468   10355 config.go:182] Loaded profile config "stopped-upgrade-743000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0812 03:45:47.010514   10355 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:45:47.015106   10355 out.go:177] * Using the qemu2 driver based on user configuration
	I0812 03:45:47.022208   10355 start.go:297] selected driver: qemu2
	I0812 03:45:47.022213   10355 start.go:901] validating driver "qemu2" against <nil>
	I0812 03:45:47.022219   10355 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:45:47.024366   10355 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 03:45:47.027109   10355 out.go:177] * Automatically selected the socket_vmnet network
	I0812 03:45:47.028565   10355 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 03:45:47.028596   10355 cni.go:84] Creating CNI manager for ""
	I0812 03:45:47.028603   10355 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0812 03:45:47.028606   10355 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0812 03:45:47.028645   10355 start.go:340] cluster config:
	{Name:no-preload-120000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-120000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:45:47.032123   10355 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:45:47.040206   10355 out.go:177] * Starting "no-preload-120000" primary control-plane node in "no-preload-120000" cluster
	I0812 03:45:47.044126   10355 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0812 03:45:47.044206   10355 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/no-preload-120000/config.json ...
	I0812 03:45:47.044225   10355 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/no-preload-120000/config.json: {Name:mkfdd5c737dd8b7cff83c63690d0399c1a0bcd79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:45:47.044232   10355 cache.go:107] acquiring lock: {Name:mk6240b1db886ed6c78a6e21c36c3453b00d55e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:45:47.044239   10355 cache.go:107] acquiring lock: {Name:mk3ce67963ce86ed344585bd6c0d2a481550e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:45:47.044270   10355 cache.go:107] acquiring lock: {Name:mk84856e4681ade0de1c4acca7fc7801b3c97f50 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:45:47.044309   10355 cache.go:115] /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0812 03:45:47.044317   10355 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 78.709µs
	I0812 03:45:47.044323   10355 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0812 03:45:47.044331   10355 cache.go:107] acquiring lock: {Name:mk9203fb9c6b82dd527407fb66c06c01b91a8699 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:45:47.044397   10355 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0812 03:45:47.044419   10355 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0812 03:45:47.044435   10355 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0812 03:45:47.044452   10355 cache.go:107] acquiring lock: {Name:mka8529738e983c88d9b5b609445c658a4102b2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:45:47.044534   10355 cache.go:107] acquiring lock: {Name:mkd51c8e78a2c31d22b94f298e6348e7a7b91439 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:45:47.044551   10355 cache.go:107] acquiring lock: {Name:mk53979183e93af4cecdc6a3e2837cf048ee1445 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:45:47.044571   10355 cache.go:107] acquiring lock: {Name:mkdb2237fd6622988d081e3465e7850d01d7aaed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:45:47.044595   10355 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0812 03:45:47.044631   10355 start.go:360] acquireMachinesLock for no-preload-120000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:45:47.044664   10355 start.go:364] duration metric: took 27.084µs to acquireMachinesLock for "no-preload-120000"
	I0812 03:45:47.044672   10355 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0812 03:45:47.044677   10355 start.go:93] Provisioning new machine with config: &{Name:no-preload-120000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-120000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:45:47.044702   10355 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0812 03:45:47.044705   10355 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:45:47.044678   10355 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0812 03:45:47.048986   10355 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0812 03:45:47.056108   10355 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0812 03:45:47.056195   10355 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0812 03:45:47.056218   10355 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0812 03:45:47.056238   10355 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0812 03:45:47.058156   10355 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0812 03:45:47.058235   10355 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0812 03:45:47.058281   10355 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0812 03:45:47.064784   10355 start.go:159] libmachine.API.Create for "no-preload-120000" (driver="qemu2")
	I0812 03:45:47.064803   10355 client.go:168] LocalClient.Create starting
	I0812 03:45:47.064881   10355 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:45:47.064911   10355 main.go:141] libmachine: Decoding PEM data...
	I0812 03:45:47.064923   10355 main.go:141] libmachine: Parsing certificate...
	I0812 03:45:47.064963   10355 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:45:47.064985   10355 main.go:141] libmachine: Decoding PEM data...
	I0812 03:45:47.064994   10355 main.go:141] libmachine: Parsing certificate...
	I0812 03:45:47.065349   10355 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:45:47.220658   10355 main.go:141] libmachine: Creating SSH key...
	I0812 03:45:47.352742   10355 main.go:141] libmachine: Creating Disk image...
	I0812 03:45:47.352760   10355 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:45:47.352986   10355 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/no-preload-120000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/no-preload-120000/disk.qcow2
	I0812 03:45:47.362517   10355 main.go:141] libmachine: STDOUT: 
	I0812 03:45:47.362533   10355 main.go:141] libmachine: STDERR: 
	I0812 03:45:47.362575   10355 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/no-preload-120000/disk.qcow2 +20000M
	I0812 03:45:47.370654   10355 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:45:47.370669   10355 main.go:141] libmachine: STDERR: 
	I0812 03:45:47.370681   10355 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/no-preload-120000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/no-preload-120000/disk.qcow2
	I0812 03:45:47.370686   10355 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:45:47.370701   10355 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:45:47.370731   10355 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/no-preload-120000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/no-preload-120000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/no-preload-120000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:75:87:29:31:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/no-preload-120000/disk.qcow2
	I0812 03:45:47.372631   10355 main.go:141] libmachine: STDOUT: 
	I0812 03:45:47.372660   10355 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:45:47.372680   10355 client.go:171] duration metric: took 307.871875ms to LocalClient.Create
	I0812 03:45:47.409829   10355 cache.go:162] opening:  /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0
	I0812 03:45:47.429765   10355 cache.go:162] opening:  /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0812 03:45:47.464835   10355 cache.go:162] opening:  /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0
	I0812 03:45:47.464999   10355 cache.go:162] opening:  /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0812 03:45:47.497749   10355 cache.go:162] opening:  /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0812 03:45:47.505243   10355 cache.go:162] opening:  /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0
	I0812 03:45:47.533202   10355 cache.go:162] opening:  /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0
	I0812 03:45:47.568922   10355 cache.go:157] /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0812 03:45:47.568934   10355 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 524.4785ms
	I0812 03:45:47.568942   10355 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0812 03:45:49.372781   10355 start.go:128] duration metric: took 2.328092208s to createHost
	I0812 03:45:49.372820   10355 start.go:83] releasing machines lock for "no-preload-120000", held for 2.328184458s
	W0812 03:45:49.372855   10355 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:45:49.379289   10355 out.go:177] * Deleting "no-preload-120000" in qemu2 ...
	W0812 03:45:49.403991   10355 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:45:49.404006   10355 start.go:729] Will try again in 5 seconds ...
	I0812 03:45:49.933452   10355 cache.go:157] /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 exists
	I0812 03:45:49.933496   10355 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0" took 2.889073291s
	I0812 03:45:49.933516   10355 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 succeeded
	I0812 03:45:50.788310   10355 cache.go:157] /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0812 03:45:50.788327   10355 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.7439725s
	I0812 03:45:50.788334   10355 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0812 03:45:51.007915   10355 cache.go:157] /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 exists
	I0812 03:45:51.007932   10355 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0" took 3.963736917s
	I0812 03:45:51.007942   10355 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 succeeded
	I0812 03:45:51.551282   10355 cache.go:157] /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 exists
	I0812 03:45:51.551299   10355 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0" took 4.50713275s
	I0812 03:45:51.551308   10355 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 succeeded
	I0812 03:45:51.581159   10355 cache.go:157] /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 exists
	I0812 03:45:51.581169   10355 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0" took 4.53674175s
	I0812 03:45:51.581175   10355 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 succeeded
	I0812 03:45:54.405124   10355 start.go:360] acquireMachinesLock for no-preload-120000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:45:54.405221   10355 start.go:364] duration metric: took 78.125µs to acquireMachinesLock for "no-preload-120000"
	I0812 03:45:54.405233   10355 start.go:93] Provisioning new machine with config: &{Name:no-preload-120000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-120000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:45:54.405275   10355 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:45:54.413396   10355 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0812 03:45:54.430253   10355 start.go:159] libmachine.API.Create for "no-preload-120000" (driver="qemu2")
	I0812 03:45:54.430283   10355 client.go:168] LocalClient.Create starting
	I0812 03:45:54.430372   10355 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:45:54.430416   10355 main.go:141] libmachine: Decoding PEM data...
	I0812 03:45:54.430428   10355 main.go:141] libmachine: Parsing certificate...
	I0812 03:45:54.430478   10355 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:45:54.430502   10355 main.go:141] libmachine: Decoding PEM data...
	I0812 03:45:54.430508   10355 main.go:141] libmachine: Parsing certificate...
	I0812 03:45:54.430839   10355 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:45:54.729681   10355 main.go:141] libmachine: Creating SSH key...
	I0812 03:45:54.784945   10355 main.go:141] libmachine: Creating Disk image...
	I0812 03:45:54.784953   10355 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:45:54.788067   10355 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/no-preload-120000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/no-preload-120000/disk.qcow2
	I0812 03:45:54.800610   10355 main.go:141] libmachine: STDOUT: 
	I0812 03:45:54.800631   10355 main.go:141] libmachine: STDERR: 
	I0812 03:45:54.800701   10355 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/no-preload-120000/disk.qcow2 +20000M
	I0812 03:45:54.810143   10355 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:45:54.810170   10355 main.go:141] libmachine: STDERR: 
	I0812 03:45:54.810181   10355 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/no-preload-120000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/no-preload-120000/disk.qcow2
	I0812 03:45:54.810191   10355 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:45:54.810198   10355 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:45:54.810240   10355 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/no-preload-120000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/no-preload-120000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/no-preload-120000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:82:4f:71:a1:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/no-preload-120000/disk.qcow2
	I0812 03:45:54.812408   10355 main.go:141] libmachine: STDOUT: 
	I0812 03:45:54.812523   10355 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:45:54.812537   10355 client.go:171] duration metric: took 382.256208ms to LocalClient.Create
	I0812 03:45:55.103788   10355 cache.go:157] /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0812 03:45:55.103815   10355 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 8.059588125s
	I0812 03:45:55.103830   10355 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0812 03:45:55.103845   10355 cache.go:87] Successfully saved all images to host disk.
	I0812 03:45:56.814699   10355 start.go:128] duration metric: took 2.409421667s to createHost
	I0812 03:45:56.814787   10355 start.go:83] releasing machines lock for "no-preload-120000", held for 2.40958825s
	W0812 03:45:56.815155   10355 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-120000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-120000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:45:56.824543   10355 out.go:177] 
	W0812 03:45:56.828676   10355 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:45:56.828754   10355 out.go:239] * 
	W0812 03:45:56.831442   10355 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:45:56.838614   10355 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-120000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-120000 -n no-preload-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-120000 -n no-preload-120000: exit status 7 (63.496833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.99s)
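
The root cause above is not QEMU itself: qemu-img created and resized the disk cleanly, and the launch only died when /opt/socket_vmnet/bin/socket_vmnet_client tried to hand a file descriptor for /var/run/socket_vmnet to qemu-system-aarch64. "Connection refused" on a unix socket means nothing is listening, i.e. the socket_vmnet daemon is down on this agent. A minimal Go sketch (not part of the test suite; the socket path comes from SocketVMnetPath in the profile config above) that reproduces the failing connect in isolation:

	// socketprobe.go - dial the socket_vmnet control socket the way a client
	// connect would, to distinguish "daemon down" from a QEMU problem.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the profile config
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// A "connection refused" here matches the STDERR captured above.
			fmt.Fprintf(os.Stderr, "dial %s: %v\n", sock, err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Printf("%s is accepting connections\n", sock)
	}

If this probe fails on the build agent, every qemu2 start attempt will fail at the same step, which matches the socket_vmnet errors repeated across the qemu2 tests in this section.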

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-120000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-120000 create -f testdata/busybox.yaml: exit status 1 (30.387708ms)

** stderr ** 
	error: context "no-preload-120000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-120000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-120000 -n no-preload-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-120000 -n no-preload-120000: exit status 7 (28.773292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-120000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-120000 -n no-preload-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-120000 -n no-preload-120000: exit status 7 (29.158458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
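
This failure is downstream of FirstStart: because the VM never came up, minikube never wrote a kubeconfig entry for the profile, so every kubectl --context no-preload-120000 call fails with "context does not exist" rather than with a real deployment error. A short sketch, assuming the standard client-go kubeconfig loader, of the lookup that fails here (roughly the resolution kubectl performs for --context):

	// contextcheck.go - load the merged kubeconfig and check for a named context.
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		const name = "no-preload-120000"
		cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if _, ok := cfg.Contexts[name]; !ok {
			// Matches the kubectl error captured above.
			fmt.Fprintf(os.Stderr, "context %q does not exist\n", name)
			os.Exit(1)
		}
		fmt.Printf("context %q found\n", name)
	}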

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-120000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-120000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-120000 describe deploy/metrics-server -n kube-system: exit status 1 (27.919083ms)

** stderr ** 
	error: context "no-preload-120000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-120000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-120000 -n no-preload-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-120000 -n no-preload-120000: exit status 7 (29.316875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/no-preload/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-120000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-120000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (5.182608791s)

-- stdout --
	* [no-preload-120000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-120000" primary control-plane node in "no-preload-120000" cluster
	* Restarting existing qemu2 VM for "no-preload-120000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-120000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0812 03:46:00.752564   10436 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:46:00.752717   10436 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:46:00.752725   10436 out.go:304] Setting ErrFile to fd 2...
	I0812 03:46:00.752729   10436 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:46:00.752867   10436 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:46:00.753923   10436 out.go:298] Setting JSON to false
	I0812 03:46:00.770444   10436 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6330,"bootTime":1723453230,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:46:00.770524   10436 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:46:00.775487   10436 out.go:177] * [no-preload-120000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:46:00.781385   10436 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:46:00.781463   10436 notify.go:220] Checking for updates...
	I0812 03:46:00.789325   10436 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:46:00.792378   10436 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:46:00.795468   10436 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:46:00.798405   10436 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:46:00.801448   10436 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:46:00.804643   10436 config.go:182] Loaded profile config "no-preload-120000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0812 03:46:00.804911   10436 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:46:00.809441   10436 out.go:177] * Using the qemu2 driver based on existing profile
	I0812 03:46:00.816423   10436 start.go:297] selected driver: qemu2
	I0812 03:46:00.816430   10436 start.go:901] validating driver "qemu2" against &{Name:no-preload-120000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-120000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:46:00.816477   10436 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:46:00.818904   10436 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 03:46:00.818944   10436 cni.go:84] Creating CNI manager for ""
	I0812 03:46:00.818952   10436 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0812 03:46:00.818985   10436 start.go:340] cluster config:
	{Name:no-preload-120000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-120000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:46:00.822469   10436 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:46:00.830374   10436 out.go:177] * Starting "no-preload-120000" primary control-plane node in "no-preload-120000" cluster
	I0812 03:46:00.834448   10436 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0812 03:46:00.834534   10436 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/no-preload-120000/config.json ...
	I0812 03:46:00.834549   10436 cache.go:107] acquiring lock: {Name:mk3ce67963ce86ed344585bd6c0d2a481550e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:46:00.834543   10436 cache.go:107] acquiring lock: {Name:mk84856e4681ade0de1c4acca7fc7801b3c97f50 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:46:00.834555   10436 cache.go:107] acquiring lock: {Name:mk6240b1db886ed6c78a6e21c36c3453b00d55e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:46:00.834617   10436 cache.go:115] /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0812 03:46:00.834625   10436 cache.go:115] /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 exists
	I0812 03:46:00.834623   10436 cache.go:115] /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 exists
	I0812 03:46:00.834627   10436 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 78.916µs
	I0812 03:46:00.834629   10436 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0" took 88.125µs
	I0812 03:46:00.834630   10436 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0" took 94.709µs
	I0812 03:46:00.834634   10436 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 succeeded
	I0812 03:46:00.834636   10436 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 succeeded
	I0812 03:46:00.834633   10436 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0812 03:46:00.834648   10436 cache.go:107] acquiring lock: {Name:mkdb2237fd6622988d081e3465e7850d01d7aaed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:46:00.834658   10436 cache.go:107] acquiring lock: {Name:mkd51c8e78a2c31d22b94f298e6348e7a7b91439 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:46:00.834688   10436 cache.go:115] /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 exists
	I0812 03:46:00.834640   10436 cache.go:107] acquiring lock: {Name:mka8529738e983c88d9b5b609445c658a4102b2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:46:00.834692   10436 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0" took 44.708µs
	I0812 03:46:00.834696   10436 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 succeeded
	I0812 03:46:00.834697   10436 cache.go:115] /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 exists
	I0812 03:46:00.834701   10436 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0" took 43.583µs
	I0812 03:46:00.834704   10436 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 succeeded
	I0812 03:46:00.834649   10436 cache.go:107] acquiring lock: {Name:mk9203fb9c6b82dd527407fb66c06c01b91a8699 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:46:00.834711   10436 cache.go:107] acquiring lock: {Name:mk53979183e93af4cecdc6a3e2837cf048ee1445 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:46:00.834746   10436 cache.go:115] /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0812 03:46:00.834749   10436 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 110.25µs
	I0812 03:46:00.834752   10436 cache.go:115] /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0812 03:46:00.834756   10436 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 108.042µs
	I0812 03:46:00.834759   10436 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0812 03:46:00.834753   10436 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0812 03:46:00.834762   10436 cache.go:115] /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0812 03:46:00.834766   10436 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 81.084µs
	I0812 03:46:00.834774   10436 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0812 03:46:00.834779   10436 cache.go:87] Successfully saved all images to host disk.
	I0812 03:46:00.834952   10436 start.go:360] acquireMachinesLock for no-preload-120000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:46:00.834983   10436 start.go:364] duration metric: took 22.75µs to acquireMachinesLock for "no-preload-120000"
	I0812 03:46:00.834993   10436 start.go:96] Skipping create...Using existing machine configuration
	I0812 03:46:00.834999   10436 fix.go:54] fixHost starting: 
	I0812 03:46:00.835099   10436 fix.go:112] recreateIfNeeded on no-preload-120000: state=Stopped err=<nil>
	W0812 03:46:00.835110   10436 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 03:46:00.843431   10436 out.go:177] * Restarting existing qemu2 VM for "no-preload-120000" ...
	I0812 03:46:00.847364   10436 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:46:00.847403   10436 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/no-preload-120000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/no-preload-120000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/no-preload-120000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:82:4f:71:a1:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/no-preload-120000/disk.qcow2
	I0812 03:46:00.849463   10436 main.go:141] libmachine: STDOUT: 
	I0812 03:46:00.849482   10436 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:46:00.849510   10436 fix.go:56] duration metric: took 14.5115ms for fixHost
	I0812 03:46:00.849514   10436 start.go:83] releasing machines lock for "no-preload-120000", held for 14.527417ms
	W0812 03:46:00.849520   10436 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:46:00.849549   10436 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:46:00.849554   10436 start.go:729] Will try again in 5 seconds ...
	I0812 03:46:05.851694   10436 start.go:360] acquireMachinesLock for no-preload-120000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:46:05.852072   10436 start.go:364] duration metric: took 287.833µs to acquireMachinesLock for "no-preload-120000"
	I0812 03:46:05.852116   10436 start.go:96] Skipping create...Using existing machine configuration
	I0812 03:46:05.852134   10436 fix.go:54] fixHost starting: 
	I0812 03:46:05.852637   10436 fix.go:112] recreateIfNeeded on no-preload-120000: state=Stopped err=<nil>
	W0812 03:46:05.852651   10436 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 03:46:05.856914   10436 out.go:177] * Restarting existing qemu2 VM for "no-preload-120000" ...
	I0812 03:46:05.864793   10436 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:46:05.864967   10436 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/no-preload-120000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/no-preload-120000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/no-preload-120000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:82:4f:71:a1:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/no-preload-120000/disk.qcow2
	I0812 03:46:05.872307   10436 main.go:141] libmachine: STDOUT: 
	I0812 03:46:05.872378   10436 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:46:05.872444   10436 fix.go:56] duration metric: took 20.310708ms for fixHost
	I0812 03:46:05.872459   10436 start.go:83] releasing machines lock for "no-preload-120000", held for 20.3735ms
	W0812 03:46:05.872606   10436 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-120000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:46:05.880926   10436 out.go:177] 
	W0812 03:46:05.884046   10436 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:46:05.884068   10436 out.go:239] * 
	W0812 03:46:05.885565   10436 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:46:05.899910   10436 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-120000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-120000 -n no-preload-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-120000 -n no-preload-120000: exit status 7 (53.505666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.24s)
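
The SecondStart log shows the restart path rather than the create path: fixHost finds the machine Stopped, relaunches it, and when the driver start fails it waits five seconds and retries exactly once before exiting 80 with GUEST_PROVISION. A sketch of that control flow (startHost here is a hypothetical stand-in for the real driver start call):

	// retrystart.go - the start/sleep/retry-once shape visible in the log above.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	func startHost() error {
		// Stand-in: the real path shells out to socket_vmnet_client + qemu,
		// which is what returns "Connection refused" in this run.
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second)
			if err = startHost(); err != nil {
				fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
				os.Exit(80) // matches the exit status the test observed
			}
		}
		fmt.Println("host started")
	}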

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-120000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-120000 -n no-preload-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-120000 -n no-preload-120000: exit status 7 (30.7795ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-120000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-120000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-120000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.691166ms)

** stderr ** 
	error: context "no-preload-120000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-120000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-120000 -n no-preload-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-120000 -n no-preload-120000: exit status 7 (28.796375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-120000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-rc.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-rc.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-120000 -n no-preload-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-120000 -n no-preload-120000: exit status 7 (28.847459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
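
The -want +got diff above lists every expected image as missing because image list against a stopped host returns nothing usable, and the check amounts to a set difference of image names. A simplified stand-in for that comparison (the real test appears to diff the slices with go-cmp, judging by the "(-want +got)" rendering):

	// imagediff.go - report expected images absent from the actual list.
	package main

	import "fmt"

	func main() {
		want := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/coredns/coredns:v1.11.1",
			"registry.k8s.io/etcd:3.5.15-0",
			"registry.k8s.io/kube-apiserver:v1.31.0-rc.0",
			"registry.k8s.io/kube-controller-manager:v1.31.0-rc.0",
			"registry.k8s.io/kube-proxy:v1.31.0-rc.0",
			"registry.k8s.io/kube-scheduler:v1.31.0-rc.0",
			"registry.k8s.io/pause:3.10",
		}
		got := map[string]bool{} // empty: the host is Stopped, so `image list` returned nothing
		for _, img := range want {
			if !got[img] {
				fmt.Printf("- %q,\n", img) // missing, as rendered in the diff above
			}
		}
	}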

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-120000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-120000 --alsologtostderr -v=1: exit status 83 (40.304875ms)

-- stdout --
	* The control-plane node no-preload-120000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-120000"

-- /stdout --
** stderr ** 
	I0812 03:46:06.145499   10455 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:46:06.145671   10455 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:46:06.145675   10455 out.go:304] Setting ErrFile to fd 2...
	I0812 03:46:06.145677   10455 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:46:06.145807   10455 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:46:06.146026   10455 out.go:298] Setting JSON to false
	I0812 03:46:06.146034   10455 mustload.go:65] Loading cluster: no-preload-120000
	I0812 03:46:06.146223   10455 config.go:182] Loaded profile config "no-preload-120000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0812 03:46:06.151100   10455 out.go:177] * The control-plane node no-preload-120000 host is not running: state=Stopped
	I0812 03:46:06.155047   10455 out.go:177]   To start a cluster, run: "minikube start -p no-preload-120000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-120000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-120000 -n no-preload-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-120000 -n no-preload-120000: exit status 7 (29.147667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-120000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-120000 -n no-preload-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-120000 -n no-preload-120000: exit status 7 (28.42475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
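
Pause is the one failure here that exits 83 rather than 80 or 7: mustload loads the profile, sees the host Stopped, prints the advice text, and never attempts to pause. The post-mortem pattern used throughout this report (query {{.Host}} before acting) can be replicated; a hypothetical helper sketch:

	// pausegate.go - query the host state first and skip pause unless Running.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// status exits non-zero (7) for a stopped host, so the error is
		// expected here; stdout still carries the state string.
		out, _ := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", "no-preload-120000").Output()
		state := strings.TrimSpace(string(out))
		if state != "Running" {
			fmt.Printf("host is %q, skipping pause\n", state)
			return
		}
		// Safe to run: out/minikube-darwin-arm64 pause -p no-preload-120000
	}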

TestStartStop/group/embed-certs/serial/FirstStart (10.17s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-397000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-397000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (10.09811625s)

-- stdout --
	* [embed-certs-397000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-397000" primary control-plane node in "embed-certs-397000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-397000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0812 03:46:06.460006   10472 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:46:06.460155   10472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:46:06.460158   10472 out.go:304] Setting ErrFile to fd 2...
	I0812 03:46:06.460160   10472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:46:06.460282   10472 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:46:06.461317   10472 out.go:298] Setting JSON to false
	I0812 03:46:06.477528   10472 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6336,"bootTime":1723453230,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:46:06.477610   10472 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:46:06.482302   10472 out.go:177] * [embed-certs-397000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:46:06.489322   10472 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:46:06.489388   10472 notify.go:220] Checking for updates...
	I0812 03:46:06.497276   10472 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:46:06.500298   10472 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:46:06.503280   10472 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:46:06.506296   10472 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:46:06.509246   10472 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:46:06.512534   10472 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:46:06.512597   10472 config.go:182] Loaded profile config "stopped-upgrade-743000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0812 03:46:06.512648   10472 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:46:06.517240   10472 out.go:177] * Using the qemu2 driver based on user configuration
	I0812 03:46:06.524303   10472 start.go:297] selected driver: qemu2
	I0812 03:46:06.524309   10472 start.go:901] validating driver "qemu2" against <nil>
	I0812 03:46:06.524318   10472 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:46:06.526480   10472 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 03:46:06.529223   10472 out.go:177] * Automatically selected the socket_vmnet network
	I0812 03:46:06.532291   10472 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 03:46:06.532313   10472 cni.go:84] Creating CNI manager for ""
	I0812 03:46:06.532320   10472 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0812 03:46:06.532329   10472 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0812 03:46:06.532366   10472 start.go:340] cluster config:
	{Name:embed-certs-397000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-397000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:46:06.535883   10472 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:46:06.543217   10472 out.go:177] * Starting "embed-certs-397000" primary control-plane node in "embed-certs-397000" cluster
	I0812 03:46:06.547282   10472 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0812 03:46:06.547298   10472 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0812 03:46:06.547307   10472 cache.go:56] Caching tarball of preloaded images
	I0812 03:46:06.547368   10472 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0812 03:46:06.547373   10472 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0812 03:46:06.547451   10472 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/embed-certs-397000/config.json ...
	I0812 03:46:06.547463   10472 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/embed-certs-397000/config.json: {Name:mkd3653010790147faeaccc7aaaea6966f2a62f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:46:06.547830   10472 start.go:360] acquireMachinesLock for embed-certs-397000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:46:06.547864   10472 start.go:364] duration metric: took 28.166µs to acquireMachinesLock for "embed-certs-397000"
	I0812 03:46:06.547876   10472 start.go:93] Provisioning new machine with config: &{Name:embed-certs-397000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-397000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:46:06.547905   10472 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:46:06.552260   10472 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0812 03:46:06.569183   10472 start.go:159] libmachine.API.Create for "embed-certs-397000" (driver="qemu2")
	I0812 03:46:06.569218   10472 client.go:168] LocalClient.Create starting
	I0812 03:46:06.569284   10472 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:46:06.569321   10472 main.go:141] libmachine: Decoding PEM data...
	I0812 03:46:06.569330   10472 main.go:141] libmachine: Parsing certificate...
	I0812 03:46:06.569385   10472 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:46:06.569407   10472 main.go:141] libmachine: Decoding PEM data...
	I0812 03:46:06.569415   10472 main.go:141] libmachine: Parsing certificate...
	I0812 03:46:06.569796   10472 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:46:06.722166   10472 main.go:141] libmachine: Creating SSH key...
	I0812 03:46:06.832642   10472 main.go:141] libmachine: Creating Disk image...
	I0812 03:46:06.832649   10472 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:46:06.832857   10472 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/embed-certs-397000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/embed-certs-397000/disk.qcow2
	I0812 03:46:06.842091   10472 main.go:141] libmachine: STDOUT: 
	I0812 03:46:06.842117   10472 main.go:141] libmachine: STDERR: 
	I0812 03:46:06.842170   10472 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/embed-certs-397000/disk.qcow2 +20000M
	I0812 03:46:06.850134   10472 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:46:06.850147   10472 main.go:141] libmachine: STDERR: 
	I0812 03:46:06.850171   10472 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/embed-certs-397000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/embed-certs-397000/disk.qcow2
	I0812 03:46:06.850175   10472 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:46:06.850188   10472 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:46:06.850215   10472 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/embed-certs-397000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/embed-certs-397000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/embed-certs-397000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:68:fa:a9:ab:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/embed-certs-397000/disk.qcow2
	I0812 03:46:06.851921   10472 main.go:141] libmachine: STDOUT: 
	I0812 03:46:06.851937   10472 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:46:06.851953   10472 client.go:171] duration metric: took 282.734333ms to LocalClient.Create
	I0812 03:46:08.854145   10472 start.go:128] duration metric: took 2.306240834s to createHost
	I0812 03:46:08.854220   10472 start.go:83] releasing machines lock for "embed-certs-397000", held for 2.30637875s
	W0812 03:46:08.854370   10472 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:46:08.866898   10472 out.go:177] * Deleting "embed-certs-397000" in qemu2 ...
	W0812 03:46:08.897791   10472 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:46:08.897823   10472 start.go:729] Will try again in 5 seconds ...
	I0812 03:46:13.899955   10472 start.go:360] acquireMachinesLock for embed-certs-397000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:46:13.900412   10472 start.go:364] duration metric: took 361.334µs to acquireMachinesLock for "embed-certs-397000"
	I0812 03:46:13.900538   10472 start.go:93] Provisioning new machine with config: &{Name:embed-certs-397000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-397000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:46:13.900837   10472 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:46:13.910383   10472 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0812 03:46:13.961504   10472 start.go:159] libmachine.API.Create for "embed-certs-397000" (driver="qemu2")
	I0812 03:46:13.961560   10472 client.go:168] LocalClient.Create starting
	I0812 03:46:13.961693   10472 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:46:13.961761   10472 main.go:141] libmachine: Decoding PEM data...
	I0812 03:46:13.961779   10472 main.go:141] libmachine: Parsing certificate...
	I0812 03:46:13.961864   10472 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:46:13.961908   10472 main.go:141] libmachine: Decoding PEM data...
	I0812 03:46:13.961928   10472 main.go:141] libmachine: Parsing certificate...
	I0812 03:46:13.963088   10472 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:46:14.135009   10472 main.go:141] libmachine: Creating SSH key...
	I0812 03:46:14.466536   10472 main.go:141] libmachine: Creating Disk image...
	I0812 03:46:14.466552   10472 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:46:14.466801   10472 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/embed-certs-397000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/embed-certs-397000/disk.qcow2
	I0812 03:46:14.476566   10472 main.go:141] libmachine: STDOUT: 
	I0812 03:46:14.476585   10472 main.go:141] libmachine: STDERR: 
	I0812 03:46:14.476631   10472 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/embed-certs-397000/disk.qcow2 +20000M
	I0812 03:46:14.484608   10472 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:46:14.484633   10472 main.go:141] libmachine: STDERR: 
	I0812 03:46:14.484647   10472 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/embed-certs-397000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/embed-certs-397000/disk.qcow2
	I0812 03:46:14.484652   10472 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:46:14.484660   10472 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:46:14.484697   10472 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/embed-certs-397000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/embed-certs-397000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/embed-certs-397000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:8f:04:6c:ee:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/embed-certs-397000/disk.qcow2
	I0812 03:46:14.486336   10472 main.go:141] libmachine: STDOUT: 
	I0812 03:46:14.486356   10472 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:46:14.486373   10472 client.go:171] duration metric: took 524.816208ms to LocalClient.Create
	I0812 03:46:16.488505   10472 start.go:128] duration metric: took 2.5876585s to createHost
	I0812 03:46:16.488552   10472 start.go:83] releasing machines lock for "embed-certs-397000", held for 2.588151625s
	W0812 03:46:16.489018   10472 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-397000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-397000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:46:16.500085   10472 out.go:177] 
	W0812 03:46:16.504240   10472 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:46:16.504284   10472 out.go:239] * 
	* 
	W0812 03:46:16.506887   10472 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:46:16.516220   10472 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-397000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-397000 -n embed-certs-397000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-397000 -n embed-certs-397000: exit status 7 (64.930625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-397000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.17s)
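Both create attempts above fail at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so the qemu-system-aarch64 process is never launched. The failure can be reproduced outside minikube with a minimal Go probe like the sketch below (illustrative, not part of the test suite; only the socket path is taken from the logs):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// SocketVMnetPath from the cluster config logged above.
	const socketPath = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
	if err != nil {
		// On this agent the dial fails with "connection refused"
		// (or a not-found error if the socket file is missing),
		// matching the STDERR lines in the log.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

On a healthy agent this dials successfully; here it would print the same connection-refused error seen on the STDERR lines above, pointing at the socket_vmnet daemon rather than at QEMU or minikube itself.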

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-188000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-188000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.918130917s)

-- stdout --
	* [default-k8s-diff-port-188000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-188000" primary control-plane node in "default-k8s-diff-port-188000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-188000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0812 03:46:10.976991   10492 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:46:10.977119   10492 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:46:10.977122   10492 out.go:304] Setting ErrFile to fd 2...
	I0812 03:46:10.977125   10492 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:46:10.977273   10492 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:46:10.978305   10492 out.go:298] Setting JSON to false
	I0812 03:46:10.994182   10492 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6340,"bootTime":1723453230,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:46:10.994250   10492 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:46:10.999193   10492 out.go:177] * [default-k8s-diff-port-188000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:46:11.005120   10492 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:46:11.005174   10492 notify.go:220] Checking for updates...
	I0812 03:46:11.012194   10492 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:46:11.015148   10492 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:46:11.018188   10492 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:46:11.021208   10492 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:46:11.022794   10492 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:46:11.026405   10492 config.go:182] Loaded profile config "embed-certs-397000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:46:11.026465   10492 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:46:11.026517   10492 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:46:11.030153   10492 out.go:177] * Using the qemu2 driver based on user configuration
	I0812 03:46:11.036156   10492 start.go:297] selected driver: qemu2
	I0812 03:46:11.036163   10492 start.go:901] validating driver "qemu2" against <nil>
	I0812 03:46:11.036170   10492 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:46:11.038332   10492 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 03:46:11.041145   10492 out.go:177] * Automatically selected the socket_vmnet network
	I0812 03:46:11.044270   10492 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 03:46:11.044318   10492 cni.go:84] Creating CNI manager for ""
	I0812 03:46:11.044327   10492 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0812 03:46:11.044332   10492 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0812 03:46:11.044365   10492 start.go:340] cluster config:
	{Name:default-k8s-diff-port-188000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-188000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:46:11.048075   10492 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:46:11.056179   10492 out.go:177] * Starting "default-k8s-diff-port-188000" primary control-plane node in "default-k8s-diff-port-188000" cluster
	I0812 03:46:11.060191   10492 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0812 03:46:11.060207   10492 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0812 03:46:11.060217   10492 cache.go:56] Caching tarball of preloaded images
	I0812 03:46:11.060279   10492 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0812 03:46:11.060287   10492 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0812 03:46:11.060354   10492 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/default-k8s-diff-port-188000/config.json ...
	I0812 03:46:11.060365   10492 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/default-k8s-diff-port-188000/config.json: {Name:mk0ad56c19c6c149c12575682f05b6f0989a30ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:46:11.060591   10492 start.go:360] acquireMachinesLock for default-k8s-diff-port-188000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:46:11.060627   10492 start.go:364] duration metric: took 27.458µs to acquireMachinesLock for "default-k8s-diff-port-188000"
	I0812 03:46:11.060640   10492 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-188000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-188000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:46:11.060666   10492 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:46:11.069140   10492 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0812 03:46:11.086850   10492 start.go:159] libmachine.API.Create for "default-k8s-diff-port-188000" (driver="qemu2")
	I0812 03:46:11.086878   10492 client.go:168] LocalClient.Create starting
	I0812 03:46:11.086941   10492 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:46:11.086979   10492 main.go:141] libmachine: Decoding PEM data...
	I0812 03:46:11.086987   10492 main.go:141] libmachine: Parsing certificate...
	I0812 03:46:11.087025   10492 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:46:11.087047   10492 main.go:141] libmachine: Decoding PEM data...
	I0812 03:46:11.087054   10492 main.go:141] libmachine: Parsing certificate...
	I0812 03:46:11.087495   10492 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:46:11.241288   10492 main.go:141] libmachine: Creating SSH key...
	I0812 03:46:11.399229   10492 main.go:141] libmachine: Creating Disk image...
	I0812 03:46:11.399235   10492 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:46:11.399460   10492 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/default-k8s-diff-port-188000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/default-k8s-diff-port-188000/disk.qcow2
	I0812 03:46:11.408948   10492 main.go:141] libmachine: STDOUT: 
	I0812 03:46:11.408964   10492 main.go:141] libmachine: STDERR: 
	I0812 03:46:11.409018   10492 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/default-k8s-diff-port-188000/disk.qcow2 +20000M
	I0812 03:46:11.416941   10492 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:46:11.416953   10492 main.go:141] libmachine: STDERR: 
	I0812 03:46:11.416967   10492 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/default-k8s-diff-port-188000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/default-k8s-diff-port-188000/disk.qcow2
	I0812 03:46:11.416971   10492 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:46:11.416983   10492 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:46:11.417008   10492 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/default-k8s-diff-port-188000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/default-k8s-diff-port-188000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/default-k8s-diff-port-188000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:7b:3f:79:3e:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/default-k8s-diff-port-188000/disk.qcow2
	I0812 03:46:11.418615   10492 main.go:141] libmachine: STDOUT: 
	I0812 03:46:11.418631   10492 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:46:11.418654   10492 client.go:171] duration metric: took 331.776208ms to LocalClient.Create
	I0812 03:46:13.420873   10492 start.go:128] duration metric: took 2.36021675s to createHost
	I0812 03:46:13.420920   10492 start.go:83] releasing machines lock for "default-k8s-diff-port-188000", held for 2.360316084s
	W0812 03:46:13.420983   10492 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:46:13.431140   10492 out.go:177] * Deleting "default-k8s-diff-port-188000" in qemu2 ...
	W0812 03:46:13.463072   10492 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:46:13.463109   10492 start.go:729] Will try again in 5 seconds ...
	I0812 03:46:18.465312   10492 start.go:360] acquireMachinesLock for default-k8s-diff-port-188000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:46:18.465677   10492 start.go:364] duration metric: took 275.166µs to acquireMachinesLock for "default-k8s-diff-port-188000"
	I0812 03:46:18.465803   10492 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-188000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-188000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:46:18.466128   10492 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:46:18.475894   10492 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0812 03:46:18.524731   10492 start.go:159] libmachine.API.Create for "default-k8s-diff-port-188000" (driver="qemu2")
	I0812 03:46:18.524779   10492 client.go:168] LocalClient.Create starting
	I0812 03:46:18.524879   10492 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:46:18.524935   10492 main.go:141] libmachine: Decoding PEM data...
	I0812 03:46:18.524954   10492 main.go:141] libmachine: Parsing certificate...
	I0812 03:46:18.525010   10492 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:46:18.525039   10492 main.go:141] libmachine: Decoding PEM data...
	I0812 03:46:18.525049   10492 main.go:141] libmachine: Parsing certificate...
	I0812 03:46:18.525825   10492 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:46:18.686718   10492 main.go:141] libmachine: Creating SSH key...
	I0812 03:46:18.796642   10492 main.go:141] libmachine: Creating Disk image...
	I0812 03:46:18.796647   10492 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:46:18.796843   10492 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/default-k8s-diff-port-188000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/default-k8s-diff-port-188000/disk.qcow2
	I0812 03:46:18.806054   10492 main.go:141] libmachine: STDOUT: 
	I0812 03:46:18.806078   10492 main.go:141] libmachine: STDERR: 
	I0812 03:46:18.806132   10492 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/default-k8s-diff-port-188000/disk.qcow2 +20000M
	I0812 03:46:18.814145   10492 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:46:18.814157   10492 main.go:141] libmachine: STDERR: 
	I0812 03:46:18.814175   10492 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/default-k8s-diff-port-188000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/default-k8s-diff-port-188000/disk.qcow2
	I0812 03:46:18.814180   10492 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:46:18.814194   10492 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:46:18.814218   10492 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/default-k8s-diff-port-188000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/default-k8s-diff-port-188000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/default-k8s-diff-port-188000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:9f:3a:e5:9f:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/default-k8s-diff-port-188000/disk.qcow2
	I0812 03:46:18.815776   10492 main.go:141] libmachine: STDOUT: 
	I0812 03:46:18.815790   10492 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:46:18.815802   10492 client.go:171] duration metric: took 291.020791ms to LocalClient.Create
	I0812 03:46:20.817957   10492 start.go:128] duration metric: took 2.351827292s to createHost
	I0812 03:46:20.818015   10492 start.go:83] releasing machines lock for "default-k8s-diff-port-188000", held for 2.352342041s
	W0812 03:46:20.818339   10492 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-188000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-188000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:46:20.841738   10492 out.go:177] 
	W0812 03:46:20.844847   10492 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:46:20.844874   10492 out.go:239] * 
	* 
	W0812 03:46:20.847519   10492 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:46:20.853680   10492 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-188000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-188000 -n default-k8s-diff-port-188000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-188000 -n default-k8s-diff-port-188000: exit status 7 (58.682125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-188000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.98s)
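The two FirstStart failures above share the same control flow: LocalClient.Create fails, the half-created profile is deleted, start.go waits five seconds and retries once, and the second failure becomes the fatal GUEST_PROVISION error (exit status 80). A compressed Go sketch of that shape, with illustrative function names rather than minikube's actual API:

package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for libmachine.API.Create, which in the logs above
// always fails before QEMU is started.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := createHost(); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err = createHost(); err != nil {
			// The second failure is fatal: GUEST_PROVISION, exit status 80.
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}
}

Because the root cause (no listening socket_vmnet daemon) is unchanged between attempts, the single retry only adds the roughly ten seconds visible in each test's duration.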

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-397000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-397000 create -f testdata/busybox.yaml: exit status 1 (30.048584ms)

** stderr ** 
	error: context "embed-certs-397000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-397000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-397000 -n embed-certs-397000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-397000 -n embed-certs-397000: exit status 7 (28.121542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-397000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-397000 -n embed-certs-397000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-397000 -n embed-certs-397000: exit status 7 (28.552ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-397000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
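DeployApp fails before any YAML is applied: the earlier FirstStart failure means no context named embed-certs-397000 was ever written to the kubeconfig, so kubectl rejects the --context flag outright. A minimal sketch of that precondition check, assuming the k8s.io/client-go dependency (the kubeconfig path and context name come from the logs; this is not the test's actual code):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// KUBECONFIG path from the test environment above.
	cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/19409-6342/kubeconfig")
	if err != nil {
		panic(err)
	}
	if _, ok := cfg.Contexts["embed-certs-397000"]; !ok {
		// Matches kubectl's error in the log above.
		fmt.Println(`error: context "embed-certs-397000" does not exist`)
	}
}

Every later serial step for this profile (EnableAddonWhileActive, SecondStart, and so on) hits the same missing-context or stopped-host wall, which is why they fail in well under a second.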

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-397000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-397000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-397000 describe deploy/metrics-server -n kube-system: exit status 1 (26.960875ms)

** stderr ** 
	error: context "embed-certs-397000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-397000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-397000 -n embed-certs-397000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-397000 -n embed-certs-397000: exit status 7 (28.814125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-397000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-188000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-188000 create -f testdata/busybox.yaml: exit status 1 (31.182542ms)

** stderr ** 
	error: context "default-k8s-diff-port-188000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-188000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-188000 -n default-k8s-diff-port-188000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-188000 -n default-k8s-diff-port-188000: exit status 7 (27.529667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-188000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-188000 -n default-k8s-diff-port-188000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-188000 -n default-k8s-diff-port-188000: exit status 7 (29.346791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-188000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
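Each post-mortem block above runs `minikube status --format={{.Host}}` and tolerates exit status 7, which, as the "(may be ok)" note suggests, indicates the profile exists but the host is stopped. A minimal Go sketch of that probe (binary path and profile name are taken from the logs; the exit-code handling is illustrative):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "default-k8s-diff-port-188000",
		"-n", "default-k8s-diff-port-188000")
	out, err := cmd.Output()
	fmt.Println("host state:", strings.TrimSpace(string(out))) // "Stopped" in the runs above

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
		// Exit status 7 is how the helper learns the host is not running;
		// it then skips log retrieval instead of failing the post-mortem.
		fmt.Println("status error: exit status 7 (may be ok)")
	}
}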

TestStartStop/group/embed-certs/serial/SecondStart (5.28s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-397000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-397000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.211223s)

-- stdout --
	* [embed-certs-397000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-397000" primary control-plane node in "embed-certs-397000" cluster
	* Restarting existing qemu2 VM for "embed-certs-397000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-397000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0812 03:46:21.024851   10556 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:46:21.025003   10556 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:46:21.025007   10556 out.go:304] Setting ErrFile to fd 2...
	I0812 03:46:21.025017   10556 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:46:21.025170   10556 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:46:21.026174   10556 out.go:298] Setting JSON to false
	I0812 03:46:21.044247   10556 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6351,"bootTime":1723453230,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:46:21.044334   10556 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:46:21.047775   10556 out.go:177] * [embed-certs-397000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:46:21.054880   10556 notify.go:220] Checking for updates...
	I0812 03:46:21.058677   10556 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:46:21.069715   10556 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:46:21.077725   10556 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:46:21.089682   10556 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:46:21.092717   10556 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:46:21.099732   10556 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:46:21.103845   10556 config.go:182] Loaded profile config "embed-certs-397000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:46:21.104137   10556 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:46:21.107732   10556 out.go:177] * Using the qemu2 driver based on existing profile
	I0812 03:46:21.113712   10556 start.go:297] selected driver: qemu2
	I0812 03:46:21.113719   10556 start.go:901] validating driver "qemu2" against &{Name:embed-certs-397000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-397000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:46:21.113788   10556 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:46:21.116357   10556 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 03:46:21.116387   10556 cni.go:84] Creating CNI manager for ""
	I0812 03:46:21.116394   10556 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0812 03:46:21.116441   10556 start.go:340] cluster config:
	{Name:embed-certs-397000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-397000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:46:21.120391   10556 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:46:21.127701   10556 out.go:177] * Starting "embed-certs-397000" primary control-plane node in "embed-certs-397000" cluster
	I0812 03:46:21.131801   10556 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0812 03:46:21.131837   10556 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0812 03:46:21.131848   10556 cache.go:56] Caching tarball of preloaded images
	I0812 03:46:21.131923   10556 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0812 03:46:21.131929   10556 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0812 03:46:21.131999   10556 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/embed-certs-397000/config.json ...
	I0812 03:46:21.132402   10556 start.go:360] acquireMachinesLock for embed-certs-397000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:46:21.132442   10556 start.go:364] duration metric: took 31.5µs to acquireMachinesLock for "embed-certs-397000"
	I0812 03:46:21.132451   10556 start.go:96] Skipping create...Using existing machine configuration
	I0812 03:46:21.132457   10556 fix.go:54] fixHost starting: 
	I0812 03:46:21.132575   10556 fix.go:112] recreateIfNeeded on embed-certs-397000: state=Stopped err=<nil>
	W0812 03:46:21.132583   10556 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 03:46:21.140730   10556 out.go:177] * Restarting existing qemu2 VM for "embed-certs-397000" ...
	I0812 03:46:21.143700   10556 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:46:21.143763   10556 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/embed-certs-397000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/embed-certs-397000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/embed-certs-397000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:8f:04:6c:ee:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/embed-certs-397000/disk.qcow2
	I0812 03:46:21.145852   10556 main.go:141] libmachine: STDOUT: 
	I0812 03:46:21.145869   10556 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:46:21.145897   10556 fix.go:56] duration metric: took 13.439042ms for fixHost
	I0812 03:46:21.145903   10556 start.go:83] releasing machines lock for "embed-certs-397000", held for 13.456459ms
	W0812 03:46:21.145909   10556 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:46:21.145950   10556 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:46:21.145954   10556 start.go:729] Will try again in 5 seconds ...
	I0812 03:46:26.148103   10556 start.go:360] acquireMachinesLock for embed-certs-397000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:46:26.148543   10556 start.go:364] duration metric: took 312.042µs to acquireMachinesLock for "embed-certs-397000"
	I0812 03:46:26.148666   10556 start.go:96] Skipping create...Using existing machine configuration
	I0812 03:46:26.148685   10556 fix.go:54] fixHost starting: 
	I0812 03:46:26.149377   10556 fix.go:112] recreateIfNeeded on embed-certs-397000: state=Stopped err=<nil>
	W0812 03:46:26.149400   10556 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 03:46:26.158895   10556 out.go:177] * Restarting existing qemu2 VM for "embed-certs-397000" ...
	I0812 03:46:26.162995   10556 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:46:26.163234   10556 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/embed-certs-397000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/embed-certs-397000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/embed-certs-397000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:8f:04:6c:ee:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/embed-certs-397000/disk.qcow2
	I0812 03:46:26.172275   10556 main.go:141] libmachine: STDOUT: 
	I0812 03:46:26.172347   10556 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:46:26.172429   10556 fix.go:56] duration metric: took 23.745125ms for fixHost
	I0812 03:46:26.172448   10556 start.go:83] releasing machines lock for "embed-certs-397000", held for 23.886041ms
	W0812 03:46:26.172623   10556 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-397000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-397000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:46:26.179881   10556 out.go:177] 
	W0812 03:46:26.183988   10556 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:46:26.184012   10556 out.go:239] * 
	* 
	W0812 03:46:26.186505   10556 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:46:26.193982   10556 out.go:177] 

** /stderr **
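The `libmachine: executing:` lines in the stderr above show how minikube's qemu2 driver launches the VM on macOS: qemu-system-aarch64 is not run directly but is wrapped in socket_vmnet_client, which connects to the unix socket at /var/run/socket_vmnet and hands the connected descriptor to QEMU as fd 3 (`-netdev socket,id=net0,fd=3`). The repeated "Connection refused" therefore comes from that connect attempt, before QEMU itself ever runs. A trimmed, annotated sketch of the same invocation (the disk path is a placeholder; the other values are taken from the log above):

    # socket_vmnet_client <socket> <command...> connects to the vmnet daemon's
    # unix socket, then execs QEMU with the connection inherited as fd 3
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
      qemu-system-aarch64 \
      -M virt,highmem=off -cpu host -accel hvf -m 2200 -smp 2 \
      -device virtio-net-pci,netdev=net0,mac=de:8f:04:6c:ee:b2 \
      -netdev socket,id=net0,fd=3 \
      -daemonize /path/to/disk.qcow2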
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-397000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-397000 -n embed-certs-397000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-397000 -n embed-certs-397000: exit status 7 (65.261584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-397000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.28s)
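Every qemu2 start failure in this group reduces to the same root cause: nothing is accepting connections on /var/run/socket_vmnet. A minimal triage sketch, assuming the install layout the logs show (/opt/socket_vmnet); the --vmnet-gateway subnet is an illustrative value from the socket_vmnet README, not something taken from this report:

    # is the socket present, and is the daemon alive?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet

    # if not, run the daemon in the foreground to see why it exits
    # (gateway subnet is an example value)
    sudo /opt/socket_vmnet/bin/socket_vmnet \
      --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

Note that the socket file can exist while the daemon is dead; only a successful connect proves it is serving.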

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-188000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-188000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-188000 describe deploy/metrics-server -n kube-system: exit status 1 (27.887417ms)

** stderr ** 
	error: context "default-k8s-diff-port-188000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-188000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-188000 -n default-k8s-diff-port-188000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-188000 -n default-k8s-diff-port-188000: exit status 7 (28.679709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-188000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.16s)
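The context "default-k8s-diff-port-188000" does not exist errors here are a downstream symptom rather than a separate bug: because the VM never started, minikube never wrote a context for the profile into the kubeconfig, so every kubectl call against that context fails identically. This can be confirmed with standard kubectl, using the KUBECONFIG path shown in the logs:

    KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig \
      kubectl config get-contexts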

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-188000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-188000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.193872334s)

-- stdout --
	* [default-k8s-diff-port-188000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-188000" primary control-plane node in "default-k8s-diff-port-188000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-188000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-188000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0812 03:46:24.148363   10592 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:46:24.148502   10592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:46:24.148506   10592 out.go:304] Setting ErrFile to fd 2...
	I0812 03:46:24.148508   10592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:46:24.148647   10592 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:46:24.149715   10592 out.go:298] Setting JSON to false
	I0812 03:46:24.165593   10592 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6354,"bootTime":1723453230,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:46:24.165651   10592 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:46:24.169833   10592 out.go:177] * [default-k8s-diff-port-188000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:46:24.177904   10592 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:46:24.177969   10592 notify.go:220] Checking for updates...
	I0812 03:46:24.184868   10592 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:46:24.187881   10592 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:46:24.190853   10592 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:46:24.193865   10592 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:46:24.196878   10592 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:46:24.200086   10592 config.go:182] Loaded profile config "default-k8s-diff-port-188000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:46:24.200356   10592 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:46:24.204801   10592 out.go:177] * Using the qemu2 driver based on existing profile
	I0812 03:46:24.211833   10592 start.go:297] selected driver: qemu2
	I0812 03:46:24.211839   10592 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-188000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-188000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:46:24.211912   10592 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:46:24.214082   10592 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 03:46:24.214101   10592 cni.go:84] Creating CNI manager for ""
	I0812 03:46:24.214110   10592 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0812 03:46:24.214135   10592 start.go:340] cluster config:
	{Name:default-k8s-diff-port-188000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-188000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:46:24.217467   10592 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:46:24.224909   10592 out.go:177] * Starting "default-k8s-diff-port-188000" primary control-plane node in "default-k8s-diff-port-188000" cluster
	I0812 03:46:24.228859   10592 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0812 03:46:24.228873   10592 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0812 03:46:24.228878   10592 cache.go:56] Caching tarball of preloaded images
	I0812 03:46:24.228926   10592 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0812 03:46:24.228931   10592 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0812 03:46:24.228985   10592 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/default-k8s-diff-port-188000/config.json ...
	I0812 03:46:24.229475   10592 start.go:360] acquireMachinesLock for default-k8s-diff-port-188000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:46:24.229506   10592 start.go:364] duration metric: took 24.334µs to acquireMachinesLock for "default-k8s-diff-port-188000"
	I0812 03:46:24.229516   10592 start.go:96] Skipping create...Using existing machine configuration
	I0812 03:46:24.229525   10592 fix.go:54] fixHost starting: 
	I0812 03:46:24.229646   10592 fix.go:112] recreateIfNeeded on default-k8s-diff-port-188000: state=Stopped err=<nil>
	W0812 03:46:24.229655   10592 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 03:46:24.231494   10592 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-188000" ...
	I0812 03:46:24.239861   10592 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:46:24.239906   10592 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/default-k8s-diff-port-188000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/default-k8s-diff-port-188000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/default-k8s-diff-port-188000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:9f:3a:e5:9f:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/default-k8s-diff-port-188000/disk.qcow2
	I0812 03:46:24.241905   10592 main.go:141] libmachine: STDOUT: 
	I0812 03:46:24.241927   10592 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:46:24.241965   10592 fix.go:56] duration metric: took 12.442291ms for fixHost
	I0812 03:46:24.241970   10592 start.go:83] releasing machines lock for "default-k8s-diff-port-188000", held for 12.46ms
	W0812 03:46:24.241976   10592 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:46:24.242008   10592 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:46:24.242013   10592 start.go:729] Will try again in 5 seconds ...
	I0812 03:46:29.244129   10592 start.go:360] acquireMachinesLock for default-k8s-diff-port-188000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:46:29.244571   10592 start.go:364] duration metric: took 344.917µs to acquireMachinesLock for "default-k8s-diff-port-188000"
	I0812 03:46:29.244683   10592 start.go:96] Skipping create...Using existing machine configuration
	I0812 03:46:29.244706   10592 fix.go:54] fixHost starting: 
	I0812 03:46:29.245465   10592 fix.go:112] recreateIfNeeded on default-k8s-diff-port-188000: state=Stopped err=<nil>
	W0812 03:46:29.245492   10592 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 03:46:29.261991   10592 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-188000" ...
	I0812 03:46:29.267766   10592 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:46:29.268096   10592 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/default-k8s-diff-port-188000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/default-k8s-diff-port-188000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/default-k8s-diff-port-188000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:9f:3a:e5:9f:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/default-k8s-diff-port-188000/disk.qcow2
	I0812 03:46:29.277196   10592 main.go:141] libmachine: STDOUT: 
	I0812 03:46:29.277260   10592 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:46:29.277361   10592 fix.go:56] duration metric: took 32.659667ms for fixHost
	I0812 03:46:29.277382   10592 start.go:83] releasing machines lock for "default-k8s-diff-port-188000", held for 32.786375ms
	W0812 03:46:29.277552   10592 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-188000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-188000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:46:29.284851   10592 out.go:177] 
	W0812 03:46:29.287919   10592 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:46:29.287961   10592 out.go:239] * 
	* 
	W0812 03:46:29.290466   10592 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:46:29.302842   10592 out.go:177] 

** /stderr **
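As the stderr above shows, minikube retries a failed host start exactly once, five seconds later ("Will try again in 5 seconds ..."), so a daemon outage longer than that window still fails the whole run. When scripting around a flaky daemon, it may help to gate the start on the socket being present first; a minimal sketch, assuming the default socket path (presence does not guarantee the daemon accepts connections, but it filters the common case):

    # wait up to 30s for the vmnet socket before attempting the start
    for i in $(seq 1 30); do
      [ -S /var/run/socket_vmnet ] && break
      sleep 1
    done
    out/minikube-darwin-arm64 start -p default-k8s-diff-port-188000 --driver=qemu2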
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-188000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-188000 -n default-k8s-diff-port-188000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-188000 -n default-k8s-diff-port-188000: exit status 7 (64.508167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-188000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)
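Two exit codes recur in these post-mortems. Exit status 80 is minikube's guest-layer error class, matching the GUEST_PROVISION reason printed above. The status command instead encodes component health in bit flags (per minikube status --help: 1 = host, 2 = cluster, 4 = kubernetes), so exit status 7 = 1|2|4 simply means everything is down for a stopped profile, which is why the harness annotates it "may be ok". This can be observed directly:

    out/minikube-darwin-arm64 status -p default-k8s-diff-port-188000 || echo "status exited: $?"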

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-397000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-397000 -n embed-certs-397000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-397000 -n embed-certs-397000: exit status 7 (32.334583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-397000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-397000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-397000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-397000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.57075ms)

** stderr ** 
	error: context "embed-certs-397000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-397000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-397000 -n embed-certs-397000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-397000 -n embed-certs-397000: exit status 7 (28.777084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-397000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-397000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
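The block above is a go-cmp diff (-want +got): every "-" line is an image expected in the v1.30.3 cache that was not reported, and since the VM never booted there was no runtime to list images from, so the entire expected set shows as missing. On a working profile the same inventory can be read in a friendlier form with:

    out/minikube-darwin-arm64 -p embed-certs-397000 image list --format=table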
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-397000 -n embed-certs-397000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-397000 -n embed-certs-397000: exit status 7 (29.168ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-397000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/embed-certs/serial/Pause (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-397000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-397000 --alsologtostderr -v=1: exit status 83 (38.274666ms)

-- stdout --
	* The control-plane node embed-certs-397000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-397000"

-- /stdout --
** stderr ** 
	I0812 03:46:26.457379   10611 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:46:26.457536   10611 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:46:26.457539   10611 out.go:304] Setting ErrFile to fd 2...
	I0812 03:46:26.457542   10611 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:46:26.457677   10611 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:46:26.457885   10611 out.go:298] Setting JSON to false
	I0812 03:46:26.457894   10611 mustload.go:65] Loading cluster: embed-certs-397000
	I0812 03:46:26.458101   10611 config.go:182] Loaded profile config "embed-certs-397000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:46:26.461415   10611 out.go:177] * The control-plane node embed-certs-397000 host is not running: state=Stopped
	I0812 03:46:26.465376   10611 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-397000"

** /stderr **
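Unlike the exit-80 provisioning failures, pause bails out early: mustload finds the profile's host stopped, prints the "To start a cluster" hint instead of attempting the pause, and exits 83. A stopped-vs-running view across all profiles, without parsing this output, is available as JSON (standard minikube command, shown here as a sketch):

    out/minikube-darwin-arm64 profile list --output json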
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-397000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-397000 -n embed-certs-397000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-397000 -n embed-certs-397000: exit status 7 (27.922958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-397000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-397000 -n embed-certs-397000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-397000 -n embed-certs-397000: exit status 7 (28.067083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-397000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.09s)

TestStartStop/group/newest-cni/serial/FirstStart (9.89s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-529000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-529000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (9.818882584s)

-- stdout --
	* [newest-cni-529000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-529000" primary control-plane node in "newest-cni-529000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-529000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0812 03:46:26.764592   10628 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:46:26.764726   10628 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:46:26.764730   10628 out.go:304] Setting ErrFile to fd 2...
	I0812 03:46:26.764732   10628 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:46:26.764858   10628 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:46:26.765931   10628 out.go:298] Setting JSON to false
	I0812 03:46:26.781866   10628 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6356,"bootTime":1723453230,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:46:26.781955   10628 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:46:26.786383   10628 out.go:177] * [newest-cni-529000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:46:26.793441   10628 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:46:26.793490   10628 notify.go:220] Checking for updates...
	I0812 03:46:26.800356   10628 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:46:26.803392   10628 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:46:26.806357   10628 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:46:26.809376   10628 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:46:26.812371   10628 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:46:26.815683   10628 config.go:182] Loaded profile config "default-k8s-diff-port-188000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:46:26.815741   10628 config.go:182] Loaded profile config "multinode-552000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:46:26.815793   10628 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:46:26.820388   10628 out.go:177] * Using the qemu2 driver based on user configuration
	I0812 03:46:26.827360   10628 start.go:297] selected driver: qemu2
	I0812 03:46:26.827367   10628 start.go:901] validating driver "qemu2" against <nil>
	I0812 03:46:26.827376   10628 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:46:26.829675   10628 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0812 03:46:26.829700   10628 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0812 03:46:26.837372   10628 out.go:177] * Automatically selected the socket_vmnet network
	I0812 03:46:26.840542   10628 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0812 03:46:26.840558   10628 cni.go:84] Creating CNI manager for ""
	I0812 03:46:26.840571   10628 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0812 03:46:26.840576   10628 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0812 03:46:26.840608   10628 start.go:340] cluster config:
	{Name:newest-cni-529000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-529000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:46:26.844329   10628 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:46:26.852374   10628 out.go:177] * Starting "newest-cni-529000" primary control-plane node in "newest-cni-529000" cluster
	I0812 03:46:26.856347   10628 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0812 03:46:26.856364   10628 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0812 03:46:26.856373   10628 cache.go:56] Caching tarball of preloaded images
	I0812 03:46:26.856448   10628 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0812 03:46:26.856460   10628 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0812 03:46:26.856529   10628 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/newest-cni-529000/config.json ...
	I0812 03:46:26.856540   10628 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/newest-cni-529000/config.json: {Name:mkdc5034f367fb92b9ad77b38d43cff6c28dc0f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:46:26.856966   10628 start.go:360] acquireMachinesLock for newest-cni-529000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:46:26.857002   10628 start.go:364] duration metric: took 29.459µs to acquireMachinesLock for "newest-cni-529000"
	I0812 03:46:26.857015   10628 start.go:93] Provisioning new machine with config: &{Name:newest-cni-529000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-529000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:46:26.857044   10628 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:46:26.865366   10628 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0812 03:46:26.884222   10628 start.go:159] libmachine.API.Create for "newest-cni-529000" (driver="qemu2")
	I0812 03:46:26.884257   10628 client.go:168] LocalClient.Create starting
	I0812 03:46:26.884325   10628 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:46:26.884354   10628 main.go:141] libmachine: Decoding PEM data...
	I0812 03:46:26.884365   10628 main.go:141] libmachine: Parsing certificate...
	I0812 03:46:26.884401   10628 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:46:26.884424   10628 main.go:141] libmachine: Decoding PEM data...
	I0812 03:46:26.884431   10628 main.go:141] libmachine: Parsing certificate...
	I0812 03:46:26.884888   10628 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:46:27.039606   10628 main.go:141] libmachine: Creating SSH key...
	I0812 03:46:27.087837   10628 main.go:141] libmachine: Creating Disk image...
	I0812 03:46:27.087842   10628 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:46:27.088025   10628 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/newest-cni-529000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/newest-cni-529000/disk.qcow2
	I0812 03:46:27.097251   10628 main.go:141] libmachine: STDOUT: 
	I0812 03:46:27.097270   10628 main.go:141] libmachine: STDERR: 
	I0812 03:46:27.097314   10628 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/newest-cni-529000/disk.qcow2 +20000M
	I0812 03:46:27.105113   10628 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:46:27.105130   10628 main.go:141] libmachine: STDERR: 
	I0812 03:46:27.105147   10628 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/newest-cni-529000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/newest-cni-529000/disk.qcow2
	I0812 03:46:27.105153   10628 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:46:27.105169   10628 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:46:27.105198   10628 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/newest-cni-529000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/newest-cni-529000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/newest-cni-529000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:45:5f:b9:b0:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/newest-cni-529000/disk.qcow2
	I0812 03:46:27.106895   10628 main.go:141] libmachine: STDOUT: 
	I0812 03:46:27.106915   10628 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:46:27.106935   10628 client.go:171] duration metric: took 222.676459ms to LocalClient.Create
	I0812 03:46:29.109096   10628 start.go:128] duration metric: took 2.252061375s to createHost
	I0812 03:46:29.109152   10628 start.go:83] releasing machines lock for "newest-cni-529000", held for 2.252170541s
	W0812 03:46:29.109224   10628 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:46:29.120521   10628 out.go:177] * Deleting "newest-cni-529000" in qemu2 ...
	W0812 03:46:29.156630   10628 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:46:29.156660   10628 start.go:729] Will try again in 5 seconds ...
	I0812 03:46:34.158772   10628 start.go:360] acquireMachinesLock for newest-cni-529000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:46:34.159330   10628 start.go:364] duration metric: took 465.916µs to acquireMachinesLock for "newest-cni-529000"
	I0812 03:46:34.159460   10628 start.go:93] Provisioning new machine with config: &{Name:newest-cni-529000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-529000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 03:46:34.159741   10628 start.go:125] createHost starting for "" (driver="qemu2")
	I0812 03:46:34.164445   10628 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0812 03:46:34.215320   10628 start.go:159] libmachine.API.Create for "newest-cni-529000" (driver="qemu2")
	I0812 03:46:34.215369   10628 client.go:168] LocalClient.Create starting
	I0812 03:46:34.215496   10628 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/ca.pem
	I0812 03:46:34.215583   10628 main.go:141] libmachine: Decoding PEM data...
	I0812 03:46:34.215604   10628 main.go:141] libmachine: Parsing certificate...
	I0812 03:46:34.215667   10628 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19409-6342/.minikube/certs/cert.pem
	I0812 03:46:34.215712   10628 main.go:141] libmachine: Decoding PEM data...
	I0812 03:46:34.215723   10628 main.go:141] libmachine: Parsing certificate...
	I0812 03:46:34.216778   10628 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0812 03:46:34.378811   10628 main.go:141] libmachine: Creating SSH key...
	I0812 03:46:34.492956   10628 main.go:141] libmachine: Creating Disk image...
	I0812 03:46:34.492963   10628 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0812 03:46:34.493169   10628 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/newest-cni-529000/disk.qcow2.raw /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/newest-cni-529000/disk.qcow2
	I0812 03:46:34.502310   10628 main.go:141] libmachine: STDOUT: 
	I0812 03:46:34.502329   10628 main.go:141] libmachine: STDERR: 
	I0812 03:46:34.502376   10628 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/newest-cni-529000/disk.qcow2 +20000M
	I0812 03:46:34.510368   10628 main.go:141] libmachine: STDOUT: Image resized.
	
	I0812 03:46:34.510381   10628 main.go:141] libmachine: STDERR: 
	I0812 03:46:34.510401   10628 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/newest-cni-529000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/newest-cni-529000/disk.qcow2
	I0812 03:46:34.510404   10628 main.go:141] libmachine: Starting QEMU VM...
	I0812 03:46:34.510414   10628 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:46:34.510443   10628 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/newest-cni-529000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/newest-cni-529000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/newest-cni-529000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:93:ac:04:1d:8d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/newest-cni-529000/disk.qcow2
	I0812 03:46:34.512029   10628 main.go:141] libmachine: STDOUT: 
	I0812 03:46:34.512041   10628 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:46:34.512058   10628 client.go:171] duration metric: took 296.687708ms to LocalClient.Create
	I0812 03:46:36.514207   10628 start.go:128] duration metric: took 2.354437709s to createHost
	I0812 03:46:36.514274   10628 start.go:83] releasing machines lock for "newest-cni-529000", held for 2.354946125s
	W0812 03:46:36.514637   10628 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-529000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-529000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:46:36.524505   10628 out.go:177] 
	W0812 03:46:36.532682   10628 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:46:36.532707   10628 out.go:239] * 
	* 
	W0812 03:46:36.535088   10628 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:46:36.547412   10628 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-529000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000: exit status 7 (65.957167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-529000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.89s)
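Every FirstStart failure in this group reduces to the same root cause visible in the stderr above: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so qemu-system-aarch64 is never launched. A minimal triage sketch, not part of the captured output, assuming the Homebrew-style paths shown in the log and the stock socket_vmnet CLI:

	# Verify the daemon socket exists and accepts connections
	ls -l /var/run/socket_vmnet
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true && echo reachable
	# If unreachable, (re)start the daemon; the gateway address is an example value
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &

Once the socket accepts connections, the same socket_vmnet_client-wrapped QEMU command line recorded above should start the VM instead of exiting with "Connection refused".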

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-188000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-188000 -n default-k8s-diff-port-188000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-188000 -n default-k8s-diff-port-188000: exit status 7 (32.5625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-188000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-188000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-188000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-188000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.725292ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-188000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-188000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-188000 -n default-k8s-diff-port-188000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-188000 -n default-k8s-diff-port-188000: exit status 7 (28.307416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-188000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)
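The context "default-k8s-diff-port-188000" does not exist errors are a downstream symptom rather than a separate bug: the earlier start never succeeded, so no kubeconfig context was ever written for the profile. A pair of checks from the same workspace (illustrative only, not part of the test run) would confirm this:

	kubectl config get-contexts                      # the profile's context is absent
	out/minikube-darwin-arm64 profile list           # the profile exists, but its host is Stopped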

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-188000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-188000 -n default-k8s-diff-port-188000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-188000 -n default-k8s-diff-port-188000: exit status 7 (28.7115ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-188000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
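The want-list in the diff above is the kubeadm image set for the requested Kubernetes version plus minikube's storage provisioner; the got-side is empty because no VM ever booted to pull the images. As a cross-check, the same set can be derived independently (a sketch assuming a local kubeadm binary; the expected output below is taken from the want-list above):

	kubeadm config images list --kubernetes-version v1.30.3
	# registry.k8s.io/kube-apiserver:v1.30.3
	# registry.k8s.io/kube-controller-manager:v1.30.3
	# registry.k8s.io/kube-scheduler:v1.30.3
	# registry.k8s.io/kube-proxy:v1.30.3
	# registry.k8s.io/coredns/coredns:v1.11.1
	# registry.k8s.io/pause:3.9
	# registry.k8s.io/etcd:3.5.12-0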

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-188000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-188000 --alsologtostderr -v=1: exit status 83 (39.86525ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-188000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-188000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 03:46:29.565004   10650 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:46:29.565162   10650 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:46:29.565165   10650 out.go:304] Setting ErrFile to fd 2...
	I0812 03:46:29.565167   10650 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:46:29.565304   10650 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:46:29.565516   10650 out.go:298] Setting JSON to false
	I0812 03:46:29.565523   10650 mustload.go:65] Loading cluster: default-k8s-diff-port-188000
	I0812 03:46:29.565729   10650 config.go:182] Loaded profile config "default-k8s-diff-port-188000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:46:29.569850   10650 out.go:177] * The control-plane node default-k8s-diff-port-188000 host is not running: state=Stopped
	I0812 03:46:29.572831   10650 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-188000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-188000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-188000 -n default-k8s-diff-port-188000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-188000 -n default-k8s-diff-port-188000: exit status 7 (28.475083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-188000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-188000 -n default-k8s-diff-port-188000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-188000 -n default-k8s-diff-port-188000: exit status 7 (27.943042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-188000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-529000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-529000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (5.184669125s)

                                                
                                                
-- stdout --
	* [newest-cni-529000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-529000" primary control-plane node in "newest-cni-529000" cluster
	* Restarting existing qemu2 VM for "newest-cni-529000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-529000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 03:46:40.076988   10697 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:46:40.077126   10697 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:46:40.077130   10697 out.go:304] Setting ErrFile to fd 2...
	I0812 03:46:40.077132   10697 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:46:40.077274   10697 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:46:40.078297   10697 out.go:298] Setting JSON to false
	I0812 03:46:40.094376   10697 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6370,"bootTime":1723453230,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:46:40.094442   10697 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:46:40.099053   10697 out.go:177] * [newest-cni-529000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:46:40.107045   10697 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:46:40.107143   10697 notify.go:220] Checking for updates...
	I0812 03:46:40.114086   10697 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:46:40.117026   10697 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:46:40.119997   10697 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:46:40.123004   10697 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:46:40.125929   10697 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:46:40.129232   10697 config.go:182] Loaded profile config "newest-cni-529000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0812 03:46:40.129529   10697 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:46:40.133994   10697 out.go:177] * Using the qemu2 driver based on existing profile
	I0812 03:46:40.141030   10697 start.go:297] selected driver: qemu2
	I0812 03:46:40.141036   10697 start.go:901] validating driver "qemu2" against &{Name:newest-cni-529000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-529000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:46:40.141101   10697 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:46:40.143427   10697 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0812 03:46:40.143453   10697 cni.go:84] Creating CNI manager for ""
	I0812 03:46:40.143460   10697 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0812 03:46:40.143492   10697 start.go:340] cluster config:
	{Name:newest-cni-529000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-529000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:46:40.147218   10697 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:46:40.155030   10697 out.go:177] * Starting "newest-cni-529000" primary control-plane node in "newest-cni-529000" cluster
	I0812 03:46:40.159060   10697 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0812 03:46:40.159073   10697 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0812 03:46:40.159078   10697 cache.go:56] Caching tarball of preloaded images
	I0812 03:46:40.159128   10697 preload.go:172] Found /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0812 03:46:40.159132   10697 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0812 03:46:40.159188   10697 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/newest-cni-529000/config.json ...
	I0812 03:46:40.159654   10697 start.go:360] acquireMachinesLock for newest-cni-529000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:46:40.159681   10697 start.go:364] duration metric: took 21.417µs to acquireMachinesLock for "newest-cni-529000"
	I0812 03:46:40.159691   10697 start.go:96] Skipping create...Using existing machine configuration
	I0812 03:46:40.159697   10697 fix.go:54] fixHost starting: 
	I0812 03:46:40.159813   10697 fix.go:112] recreateIfNeeded on newest-cni-529000: state=Stopped err=<nil>
	W0812 03:46:40.159823   10697 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 03:46:40.164003   10697 out.go:177] * Restarting existing qemu2 VM for "newest-cni-529000" ...
	I0812 03:46:40.171860   10697 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:46:40.171894   10697 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/newest-cni-529000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/newest-cni-529000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/newest-cni-529000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:93:ac:04:1d:8d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/newest-cni-529000/disk.qcow2
	I0812 03:46:40.173814   10697 main.go:141] libmachine: STDOUT: 
	I0812 03:46:40.173835   10697 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:46:40.173864   10697 fix.go:56] duration metric: took 14.167958ms for fixHost
	I0812 03:46:40.173868   10697 start.go:83] releasing machines lock for "newest-cni-529000", held for 14.182292ms
	W0812 03:46:40.173874   10697 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:46:40.173915   10697 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:46:40.173925   10697 start.go:729] Will try again in 5 seconds ...
	I0812 03:46:45.176032   10697 start.go:360] acquireMachinesLock for newest-cni-529000: {Name:mk8e67b24bc4bbffe83ca1796c00665e69ecbe77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 03:46:45.176489   10697 start.go:364] duration metric: took 375.125µs to acquireMachinesLock for "newest-cni-529000"
	I0812 03:46:45.176656   10697 start.go:96] Skipping create...Using existing machine configuration
	I0812 03:46:45.176679   10697 fix.go:54] fixHost starting: 
	I0812 03:46:45.177412   10697 fix.go:112] recreateIfNeeded on newest-cni-529000: state=Stopped err=<nil>
	W0812 03:46:45.177441   10697 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 03:46:45.186085   10697 out.go:177] * Restarting existing qemu2 VM for "newest-cni-529000" ...
	I0812 03:46:45.189056   10697 qemu.go:418] Using hvf for hardware acceleration
	I0812 03:46:45.189221   10697 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/newest-cni-529000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19409-6342/.minikube/machines/newest-cni-529000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/newest-cni-529000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:93:ac:04:1d:8d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19409-6342/.minikube/machines/newest-cni-529000/disk.qcow2
	I0812 03:46:45.199113   10697 main.go:141] libmachine: STDOUT: 
	I0812 03:46:45.199197   10697 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0812 03:46:45.199305   10697 fix.go:56] duration metric: took 22.629458ms for fixHost
	I0812 03:46:45.199330   10697 start.go:83] releasing machines lock for "newest-cni-529000", held for 22.814375ms
	W0812 03:46:45.199606   10697 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-529000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-529000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0812 03:46:45.207120   10697 out.go:177] 
	W0812 03:46:45.211149   10697 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0812 03:46:45.211180   10697 out.go:239] * 
	* 
	W0812 03:46:45.213969   10697 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:46:45.222071   10697 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-529000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000: exit status 7 (67.412458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-529000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
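The recovery step that minikube's own error message suggests can be scripted as below; this is a hedged sketch using the workspace layout from this log, and --network=socket_vmnet mirrors the Network value already present in the cluster config:

	out/minikube-darwin-arm64 delete -p newest-cni-529000
	out/minikube-darwin-arm64 start -p newest-cni-529000 --driver=qemu2 --network=socket_vmnet

Note that delete alone cannot fix this run: while /var/run/socket_vmnet refuses connections, any fresh start fails the same way.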

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-529000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-rc.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-rc.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000: exit status 7 (28.733667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-529000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-529000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-529000 --alsologtostderr -v=1: exit status 83 (40.737167ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-529000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-529000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 03:46:45.403491   10711 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:46:45.403674   10711 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:46:45.403677   10711 out.go:304] Setting ErrFile to fd 2...
	I0812 03:46:45.403680   10711 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:46:45.403822   10711 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:46:45.404057   10711 out.go:298] Setting JSON to false
	I0812 03:46:45.404065   10711 mustload.go:65] Loading cluster: newest-cni-529000
	I0812 03:46:45.404244   10711 config.go:182] Loaded profile config "newest-cni-529000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0812 03:46:45.407499   10711 out.go:177] * The control-plane node newest-cni-529000 host is not running: state=Stopped
	I0812 03:46:45.411428   10711 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-529000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-529000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000: exit status 7 (29.934959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-529000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000: exit status 7 (29.071875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-529000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

                                                
                                    

Test pass (86/266)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.30.3/json-events 9.76
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.08
18 TestDownloadOnly/v1.30.3/DeleteAll 0.11
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.1
21 TestDownloadOnly/v1.31.0-rc.0/json-events 9.66
22 TestDownloadOnly/v1.31.0-rc.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-rc.0/kubectl 0
26 TestDownloadOnly/v1.31.0-rc.0/LogsDuration 0.08
27 TestDownloadOnly/v1.31.0-rc.0/DeleteAll 0.1
28 TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds 0.1
30 TestBinaryMirror 0.29
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
44 TestHyperKitDriverInstallOrUpdate 10.18
48 TestErrorSpam/start 0.39
49 TestErrorSpam/status 0.09
50 TestErrorSpam/pause 0.12
51 TestErrorSpam/unpause 0.12
52 TestErrorSpam/stop 5.69
55 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/CacheCmd/cache/add_remote 1.75
64 TestFunctional/serial/CacheCmd/cache/add_local 1.04
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.03
69 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/parallel/ConfigCmd 0.21
80 TestFunctional/parallel/DryRun 0.27
81 TestFunctional/parallel/InternationalLanguage 0.12
87 TestFunctional/parallel/AddonsCmd 0.09
102 TestFunctional/parallel/License 0.27
103 TestFunctional/parallel/Version/short 0.04
110 TestFunctional/parallel/ImageCommands/Setup 1.72
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.07
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.09
135 TestFunctional/parallel/ProfileCmd/profile_list 0.07
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.08
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
144 TestFunctional/delete_echo-server_images 0.07
145 TestFunctional/delete_my-image_image 0.02
146 TestFunctional/delete_minikube_cached_images 0.02
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 3.36
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.2
202 TestMainNoArgs 0.03
249 TestStoppedBinaryUpgrade/Setup 1.39
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
266 TestNoKubernetes/serial/ProfileList 31.18
267 TestNoKubernetes/serial/Stop 3.65
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
281 TestStoppedBinaryUpgrade/MinikubeLogs 0.68
284 TestStartStop/group/old-k8s-version/serial/Stop 2.15
285 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
295 TestStartStop/group/no-preload/serial/Stop 3.49
296 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
308 TestStartStop/group/embed-certs/serial/Stop 4.08
309 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
313 TestStartStop/group/default-k8s-diff-port/serial/Stop 2.82
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
326 TestStartStop/group/newest-cni/serial/DeployApp 0
327 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
328 TestStartStop/group/newest-cni/serial/Stop 3.24
329 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
331 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-858000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-858000: exit status 85 (91.740667ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-858000 | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT |          |
	|         | -p download-only-858000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 03:19:15
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 03:19:15.899140    6843 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:19:15.899316    6843 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:19:15.899320    6843 out.go:304] Setting ErrFile to fd 2...
	I0812 03:19:15.899322    6843 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:19:15.899480    6843 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	W0812 03:19:15.899574    6843 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19409-6342/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19409-6342/.minikube/config/config.json: no such file or directory
	I0812 03:19:15.900836    6843 out.go:298] Setting JSON to true
	I0812 03:19:15.919728    6843 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4725,"bootTime":1723453230,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:19:15.919797    6843 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:19:15.926377    6843 out.go:97] [download-only-858000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:19:15.926498    6843 notify.go:220] Checking for updates...
	W0812 03:19:15.926551    6843 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball: no such file or directory
	I0812 03:19:15.930458    6843 out.go:169] MINIKUBE_LOCATION=19409
	I0812 03:19:15.933886    6843 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:19:15.939751    6843 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:19:15.942823    6843 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:19:15.946765    6843 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	W0812 03:19:15.952456    6843 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0812 03:19:15.952633    6843 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:19:15.956369    6843 out.go:97] Using the qemu2 driver based on user configuration
	I0812 03:19:15.956386    6843 start.go:297] selected driver: qemu2
	I0812 03:19:15.956399    6843 start.go:901] validating driver "qemu2" against <nil>
	I0812 03:19:15.956470    6843 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 03:19:15.960304    6843 out.go:169] Automatically selected the socket_vmnet network
	I0812 03:19:15.966239    6843 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0812 03:19:15.966338    6843 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0812 03:19:15.966397    6843 cni.go:84] Creating CNI manager for ""
	I0812 03:19:15.966414    6843 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0812 03:19:15.966467    6843 start.go:340] cluster config:
	{Name:download-only-858000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-858000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:19:15.970177    6843 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:19:15.974397    6843 out.go:97] Downloading VM boot image ...
	I0812 03:19:15.974423    6843 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso
	I0812 03:19:22.236988    6843 out.go:97] Starting "download-only-858000" primary control-plane node in "download-only-858000" cluster
	I0812 03:19:22.237031    6843 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0812 03:19:22.292398    6843 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0812 03:19:22.292416    6843 cache.go:56] Caching tarball of preloaded images
	I0812 03:19:22.293203    6843 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0812 03:19:22.297587    6843 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0812 03:19:22.297593    6843 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0812 03:19:22.371176    6843 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0812 03:19:29.637345    6843 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0812 03:19:29.637515    6843 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0812 03:19:30.333514    6843 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0812 03:19:30.333713    6843 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/download-only-858000/config.json ...
	I0812 03:19:30.333730    6843 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19409-6342/.minikube/profiles/download-only-858000/config.json: {Name:mk6762fe2e2f4c26319b8a4a357a4ba0c4bb833b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 03:19:30.333961    6843 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0812 03:19:30.334159    6843 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0812 03:19:30.850864    6843 out.go:169] 
	W0812 03:19:30.857887    6843 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19409-6342/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108c4dd40 0x108c4dd40 0x108c4dd40 0x108c4dd40 0x108c4dd40 0x108c4dd40 0x108c4dd40] Decompressors:map[bz2:0x1400000e250 gz:0x1400000e258 tar:0x1400000e1c0 tar.bz2:0x1400000e200 tar.gz:0x1400000e210 tar.xz:0x1400000e220 tar.zst:0x1400000e230 tbz2:0x1400000e200 tgz:0x1400000e210 txz:0x1400000e220 tzst:0x1400000e230 xz:0x1400000e260 zip:0x1400000e270 zst:0x1400000e268] Getters:map[file:0x140014205c0 http:0x1400086e500 https:0x1400086e550] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0812 03:19:30.857915    6843 out_reason.go:110] 
	W0812 03:19:30.866816    6843 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 03:19:30.870826    6843 out.go:169] 
	
	
	* The control-plane node download-only-858000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-858000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
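
The kubectl cache failure above is not a transient network error: the download helper fetches the .sha256 checksum file before the binary, and dl.k8s.io answers 404 for the v1.20.0 darwin/arm64 path (kubectl builds for Apple Silicon apparently were not published that far back). A minimal standalone Go probe of the URL quoted in the log, assuming nothing beyond that URL:

```go
// Reproduces the "bad response code: 404" from the log by probing the
// checksum URL that the download helper requests first.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Checksum URL copied verbatim from the failed download above.
	const url = "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("request error:", err)
		return
	}
	resp.Body.Close()
	fmt.Println(url, "->", resp.Status) // expected: "404 Not Found"
}
```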

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-858000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.30.3/json-events (9.76s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-681000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-681000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 : (9.764575167s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (9.76s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-681000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-681000: exit status 85 (81.957583ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-858000 | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT |                     |
	|         | -p download-only-858000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT | 12 Aug 24 03:19 PDT |
	| delete  | -p download-only-858000        | download-only-858000 | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT | 12 Aug 24 03:19 PDT |
	| start   | -o=json --download-only        | download-only-681000 | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT |                     |
	|         | -p download-only-681000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 03:19:31
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 03:19:31.283167    6867 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:19:31.283296    6867 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:19:31.283300    6867 out.go:304] Setting ErrFile to fd 2...
	I0812 03:19:31.283302    6867 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:19:31.283424    6867 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:19:31.284457    6867 out.go:298] Setting JSON to true
	I0812 03:19:31.300522    6867 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4741,"bootTime":1723453230,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:19:31.300605    6867 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:19:31.304813    6867 out.go:97] [download-only-681000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:19:31.304934    6867 notify.go:220] Checking for updates...
	I0812 03:19:31.308793    6867 out.go:169] MINIKUBE_LOCATION=19409
	I0812 03:19:31.311815    6867 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:19:31.315765    6867 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:19:31.318818    6867 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:19:31.321897    6867 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	W0812 03:19:31.327804    6867 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0812 03:19:31.327961    6867 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:19:31.330768    6867 out.go:97] Using the qemu2 driver based on user configuration
	I0812 03:19:31.330777    6867 start.go:297] selected driver: qemu2
	I0812 03:19:31.330781    6867 start.go:901] validating driver "qemu2" against <nil>
	I0812 03:19:31.330827    6867 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 03:19:31.333770    6867 out.go:169] Automatically selected the socket_vmnet network
	I0812 03:19:31.338895    6867 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0812 03:19:31.338971    6867 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0812 03:19:31.338989    6867 cni.go:84] Creating CNI manager for ""
	I0812 03:19:31.338998    6867 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0812 03:19:31.339003    6867 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0812 03:19:31.339038    6867 start.go:340] cluster config:
	{Name:download-only-681000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-681000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:19:31.342444    6867 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:19:31.345829    6867 out.go:97] Starting "download-only-681000" primary control-plane node in "download-only-681000" cluster
	I0812 03:19:31.345836    6867 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0812 03:19:31.402919    6867 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0812 03:19:31.402930    6867 cache.go:56] Caching tarball of preloaded images
	I0812 03:19:31.403107    6867 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0812 03:19:31.408165    6867 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0812 03:19:31.408173    6867 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0812 03:19:31.484235    6867 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4?checksum=md5:5a76dba1959f6b6fc5e29e1e172ab9ca -> /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-681000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-681000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.08s)
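
Note the contrast with the v1.20.0 run: there cni.go reported "CNI unnecessary in this configuration, recommending no CNI", while here, on v1.24+ with the docker runtime, it recommends bridge and sets NetworkPlugin=cni. A rough sketch of that version gate, with a hypothetical chooseCNI helper (illustrative only; minikube's actual logic lives in cni.go):

```go
// Illustrative only: mirrors the decision visible in the cni.go log lines,
// not minikube's actual implementation.
package main

import "fmt"

// chooseCNI is a hypothetical helper: docker-runtime clusters on
// Kubernetes v1.24+ get bridge, older ones run without a CNI.
func chooseCNI(k8sMinor int, runtime string) string {
	if runtime == "docker" && k8sMinor >= 24 {
		return "bridge" // "recommending bridge" (this v1.30.3 run)
	}
	return "" // "recommending no CNI" (the v1.20.0 run)
}

func main() {
	fmt.Println(chooseCNI(20, "docker") == "") // true: no CNI
	fmt.Println(chooseCNI(30, "docker"))       // bridge
}
```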

TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-681000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0-rc.0/json-events (9.66s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-833000 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-833000 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=docker --driver=qemu2 : (9.657612708s)
--- PASS: TestDownloadOnly/v1.31.0-rc.0/json-events (9.66s)

TestDownloadOnly/v1.31.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-rc.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-833000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-833000: exit status 85 (75.211084ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-858000 | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT |                     |
	|         | -p download-only-858000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT | 12 Aug 24 03:19 PDT |
	| delete  | -p download-only-858000           | download-only-858000 | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT | 12 Aug 24 03:19 PDT |
	| start   | -o=json --download-only           | download-only-681000 | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT |                     |
	|         | -p download-only-681000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT | 12 Aug 24 03:19 PDT |
	| delete  | -p download-only-681000           | download-only-681000 | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT | 12 Aug 24 03:19 PDT |
	| start   | -o=json --download-only           | download-only-833000 | jenkins | v1.33.1 | 12 Aug 24 03:19 PDT |                     |
	|         | -p download-only-833000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 03:19:41
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 03:19:41.335339    6893 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:19:41.335463    6893 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:19:41.335466    6893 out.go:304] Setting ErrFile to fd 2...
	I0812 03:19:41.335469    6893 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:19:41.335593    6893 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:19:41.336667    6893 out.go:298] Setting JSON to true
	I0812 03:19:41.352435    6893 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4751,"bootTime":1723453230,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:19:41.352502    6893 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:19:41.355890    6893 out.go:97] [download-only-833000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:19:41.355987    6893 notify.go:220] Checking for updates...
	I0812 03:19:41.359840    6893 out.go:169] MINIKUBE_LOCATION=19409
	I0812 03:19:41.362746    6893 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:19:41.366857    6893 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:19:41.369852    6893 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:19:41.372867    6893 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	W0812 03:19:41.378860    6893 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0812 03:19:41.379069    6893 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:19:41.380446    6893 out.go:97] Using the qemu2 driver based on user configuration
	I0812 03:19:41.380454    6893 start.go:297] selected driver: qemu2
	I0812 03:19:41.380458    6893 start.go:901] validating driver "qemu2" against <nil>
	I0812 03:19:41.380500    6893 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 03:19:41.383825    6893 out.go:169] Automatically selected the socket_vmnet network
	I0812 03:19:41.388987    6893 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0812 03:19:41.389069    6893 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0812 03:19:41.389086    6893 cni.go:84] Creating CNI manager for ""
	I0812 03:19:41.389109    6893 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0812 03:19:41.389115    6893 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0812 03:19:41.389158    6893 start.go:340] cluster config:
	{Name:download-only-833000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:download-only-833000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:19:41.392526    6893 iso.go:125] acquiring lock: {Name:mkdeac3198922a916c8c5d90b10163cab5757362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 03:19:41.395884    6893 out.go:97] Starting "download-only-833000" primary control-plane node in "download-only-833000" cluster
	I0812 03:19:41.395893    6893 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0812 03:19:41.449502    6893 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0812 03:19:41.449521    6893 cache.go:56] Caching tarball of preloaded images
	I0812 03:19:41.449672    6893 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0812 03:19:41.454882    6893 out.go:97] Downloading Kubernetes v1.31.0-rc.0 preload ...
	I0812 03:19:41.454890    6893 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I0812 03:19:41.529600    6893 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4?checksum=md5:c1f196b49f29ebea060b9249b6cb8e03 -> /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0812 03:19:46.284000    6893 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I0812 03:19:46.284266    6893 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19409-6342/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-833000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-833000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.08s)
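
The "getting checksum ... saving checksum ... verifying checksum" sequence in these preload logs is a plain MD5 comparison; the expected digest travels in the download URL's ?checksum=md5:... query. A minimal sketch of that verification step, assuming the tarball is already on disk; verifyMD5 and the local filename are illustrative, and only the digest comes from the URL above:

```go
// Sketch of the md5 check the preload cache performs after downloading.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 streams the file through an md5 hash and compares hex digests.
func verifyMD5(path, want string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return false, err
	}
	return hex.EncodeToString(h.Sum(nil)) == want, nil
}

func main() {
	ok, err := verifyMD5(
		"preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4",
		"c1f196b49f29ebea060b9249b6cb8e03", // from the download URL in the log
	)
	fmt.Println(ok, err)
}
```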

TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.10s)

TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-833000
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.29s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-249000 --alsologtostderr --binary-mirror http://127.0.0.1:51037 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-249000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-249000
--- PASS: TestBinaryMirror (0.29s)
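
TestBinaryMirror passes --binary-mirror http://127.0.0.1:51037, redirecting the kubectl/kubelet/kubeadm downloads to a local endpoint instead of dl.k8s.io. Any static file server exposing the same release path layout should be able to play that role; a minimal sketch (only the address comes from the log, and the served directory layout is an assumption):

```go
// Hypothetical stand-in for the test's binary mirror: a static file server
// whose tree mirrors dl.k8s.io paths, e.g.
// ./release/v1.30.3/bin/darwin/arm64/kubectl (plus the .sha256 files).
package main

import (
	"log"
	"net/http"
)

func main() {
	// 51037 matches the port in the log above.
	log.Fatal(http.ListenAndServe("127.0.0.1:51037", http.FileServer(http.Dir("."))))
}
```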

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-717000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-717000: exit status 85 (54.304875ms)

-- stdout --
	* Profile "addons-717000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-717000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-717000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-717000: exit status 85 (58.208458ms)

-- stdout --
	* Profile "addons-717000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-717000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestHyperKitDriverInstallOrUpdate (10.18s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.18s)

TestErrorSpam/start (0.39s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

TestErrorSpam/status (0.09s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 status: exit status 7 (31.035125ms)

-- stdout --
	nospam-338000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 status: exit status 7 (29.730042ms)

-- stdout --
	nospam-338000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 status: exit status 7 (28.846666ms)

-- stdout --
	nospam-338000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.09s)
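
Each "(dbg) Non-zero exit" entry records the child process's exit code: with the nospam-338000 host stopped, minikube status exits 7 all three times, and the test only checks that the repeated output stays terse. A minimal sketch of capturing such an exit code in Go; the command line is copied from the log, everything else is illustrative:

```go
// Runs minikube status and extracts the numeric exit code the way the
// harness's "(dbg)" lines report it.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "-p", "nospam-338000", "status")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// For the stopped profile above this prints "exit status 7".
		fmt.Printf("exit status %d\n%s", exitErr.ExitCode(), out)
		return
	}
	fmt.Printf("err=%v\n%s", err, out)
}
```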

TestErrorSpam/pause (0.12s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 pause: exit status 83 (39.515083ms)

-- stdout --
	* The control-plane node nospam-338000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-338000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 pause: exit status 83 (37.421625ms)

-- stdout --
	* The control-plane node nospam-338000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-338000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 pause: exit status 83 (39.181209ms)

-- stdout --
	* The control-plane node nospam-338000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-338000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)

TestErrorSpam/unpause (0.12s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 unpause: exit status 83 (38.802917ms)

-- stdout --
	* The control-plane node nospam-338000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-338000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 unpause: exit status 83 (38.038167ms)

-- stdout --
	* The control-plane node nospam-338000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-338000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 unpause: exit status 83 (39.693916ms)

-- stdout --
	* The control-plane node nospam-338000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-338000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)

TestErrorSpam/stop (5.69s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 stop: (1.864752709s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 stop: (2.061878375s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-338000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-338000 stop: (1.757442125s)
--- PASS: TestErrorSpam/stop (5.69s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19409-6342/.minikube/files/etc/test/nested/copy/6841/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.75s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.75s)

TestFunctional/serial/CacheCmd/cache/add_local (1.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-369000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local390483428/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 cache add minikube-local-cache-test:functional-369000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 cache delete minikube-local-cache-test:functional-369000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-369000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.04s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.21s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 config get cpus: exit status 14 (29.924125ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 config get cpus: exit status 14 (32.415958ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.21s)

TestFunctional/parallel/DryRun (0.27s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-369000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-369000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (157.292ms)

-- stdout --
	* [functional-369000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0812 03:21:29.913283    7443 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:21:29.913482    7443 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:21:29.913487    7443 out.go:304] Setting ErrFile to fd 2...
	I0812 03:21:29.913490    7443 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:21:29.913678    7443 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:21:29.915048    7443 out.go:298] Setting JSON to false
	I0812 03:21:29.934851    7443 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4859,"bootTime":1723453230,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:21:29.934925    7443 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:21:29.938998    7443 out.go:177] * [functional-369000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0812 03:21:29.946998    7443 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:21:29.947049    7443 notify.go:220] Checking for updates...
	I0812 03:21:29.952954    7443 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:21:29.955984    7443 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:21:29.958997    7443 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:21:29.961955    7443 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:21:29.964981    7443 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:21:29.968173    7443 config.go:182] Loaded profile config "functional-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:21:29.968443    7443 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:21:29.972948    7443 out.go:177] * Using the qemu2 driver based on existing profile
	I0812 03:21:29.978857    7443 start.go:297] selected driver: qemu2
	I0812 03:21:29.978865    7443 start.go:901] validating driver "qemu2" against &{Name:functional-369000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:functional-369000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:21:29.978909    7443 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:21:29.985917    7443 out.go:177] 
	W0812 03:21:29.989976    7443 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0812 03:21:29.992938    7443 out.go:177] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-369000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.27s)
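The dry-run above fails fast with exit status 23 because the requested 250MiB is below minikube's usable minimum of 1800MB; validation happens before any qemu2 VM is created. A sketch of the same validation-only call (illustrative harness code, not the test source):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// --dry-run exercises only driver/resource validation, so the
	// memory floor is hit before any VM work starts.
	cmd := exec.Command("minikube", "start", "-p", "functional-369000",
		"--dry-run", "--memory", "250MB", "--driver=qemu2")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 23 {
		fmt.Println("undersized memory request rejected, as expected")
	}
}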

TestFunctional/parallel/InternationalLanguage (0.12s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-369000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-369000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (114.786209ms)

-- stdout --
	* [functional-369000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0812 03:21:30.139715    7454 out.go:291] Setting OutFile to fd 1 ...
	I0812 03:21:30.139823    7454 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:21:30.139827    7454 out.go:304] Setting ErrFile to fd 2...
	I0812 03:21:30.139830    7454 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 03:21:30.139972    7454 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19409-6342/.minikube/bin
	I0812 03:21:30.141346    7454 out.go:298] Setting JSON to false
	I0812 03:21:30.157997    7454 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4860,"bootTime":1723453230,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0812 03:21:30.158078    7454 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0812 03:21:30.162973    7454 out.go:177] * [functional-369000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0812 03:21:30.170036    7454 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 03:21:30.170074    7454 notify.go:220] Checking for updates...
	I0812 03:21:30.176961    7454 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	I0812 03:21:30.179978    7454 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0812 03:21:30.182977    7454 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 03:21:30.185961    7454 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	I0812 03:21:30.193097    7454 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 03:21:30.196230    7454 config.go:182] Loaded profile config "functional-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 03:21:30.196500    7454 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 03:21:30.199869    7454 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0812 03:21:30.206896    7454 start.go:297] selected driver: qemu2
	I0812 03:21:30.206902    7454 start.go:901] validating driver "qemu2" against &{Name:functional-369000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:functional-369000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 03:21:30.206950    7454 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 03:21:30.213931    7454 out.go:177] 
	W0812 03:21:30.217992    7454 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0812 03:21:30.221004    7454 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)
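The French output above ("Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY ...") is the localized form of the same insufficient-memory error seen in DryRun. A sketch of forcing the localized path, assuming minikube picks its bundled translations from the locale environment (LC_ALL/LANG), which is how this test appears to exercise it:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "functional-369000",
		"--dry-run", "--memory", "250MB", "--driver=qemu2")
	// Assumed: a French locale makes minikube emit its fr translations.
	cmd.Env = append(os.Environ(), "LC_ALL=fr", "LANG=fr")
	out, _ := cmd.CombinedOutput()
	fmt.Printf("%s", out)
}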

TestFunctional/parallel/AddonsCmd (0.09s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)

TestFunctional/parallel/License (0.27s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.27s)

TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.72s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.690874166s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-369000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.72s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-369000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 image rm kicbase/echo-server:functional-369000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-369000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 image save --daemon kicbase/echo-server:functional-369000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-369000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.07s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

TestFunctional/parallel/ProfileCmd/profile_list (0.07s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "42.975208ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "31.828417ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.07s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "42.422666ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "32.270834ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.011964125s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-369000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_echo-server_images (0.07s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-369000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-369000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-369000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.36s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-471000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-471000 --output=json --user=testUser: (3.355330417s)
--- PASS: TestJSONOutput/stop/Command (3.36s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-230000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-230000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (97.97075ms)

-- stdout --
	{"specversion":"1.0","id":"8f821654-256d-4f3c-a0c0-3c9bfdc69171","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-230000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bb93cf56-080c-4968-9018-5a04b2a37570","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19409"}}
	{"specversion":"1.0","id":"82da0e03-318e-45e4-97d2-1514e58bcfa7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig"}}
	{"specversion":"1.0","id":"2e9bf4db-aa8b-4ec3-8b5d-4e3c5fb6becb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"5a70d0cc-6930-40a7-82c1-b0aba88a4806","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"981ed85f-710c-44dc-beb1-36693d95a388","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube"}}
	{"specversion":"1.0","id":"330e88fe-ce25-4c19-95b6-8177392ccdaf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"68e8b0b3-262b-4c89-b81f-2c28f223eeb1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-230000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-230000
--- PASS: TestErrorJSONOutput (0.20s)
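Every stdout line above is a CloudEvents-style JSON object (specversion, id, source, type, data). A short sketch for pulling the error event out of such a stream; the field names come from the log, while the filter itself is illustrative:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent models just the fields this filter needs.
type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Pipe `minikube start --output=json ...` into stdin.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev cloudEvent
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // skip non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("%s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}

Run against the stream above, this would print: DRV_UNSUPPORTED_OS (exit code 56): The driver 'fail' is not supported on darwin/arm64.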

TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1.39s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.39s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-971000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-971000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (99.678917ms)

-- stdout --
	* [NoKubernetes-971000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19409
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19409-6342/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19409-6342/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-971000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-971000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (39.326792ms)

-- stdout --
	* The control-plane node NoKubernetes-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-971000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.18s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.622079417s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.561246959s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.18s)

TestNoKubernetes/serial/Stop (3.65s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-971000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-971000: (3.644907333s)
--- PASS: TestNoKubernetes/serial/Stop (3.65s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-971000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-971000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.745584ms)

-- stdout --
	* The control-plane node NoKubernetes-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-971000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.68s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-743000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.68s)

TestStartStop/group/old-k8s-version/serial/Stop (2.15s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-061000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-061000 --alsologtostderr -v=3: (2.146038375s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.15s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-061000 -n old-k8s-version-061000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-061000 -n old-k8s-version-061000: exit status 7 (50.630083ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-061000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
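The "status error: exit status 7 (may be ok)" lines in this and the later EnableAddonAfterStop stanzas reflect that `minikube status` exits non-zero when the host is stopped; the harness tolerates the exit code as long as the Host field prints. A hypothetical helper mirroring that tolerant check:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// hostState returns the Host field from `minikube status`, ignoring
// the non-zero exit code that a stopped host deliberately produces.
func hostState(profile string) (string, error) {
	cmd := exec.Command("minikube", "status", "--format={{.Host}}", "-p", profile)
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	if err != nil && !errors.As(err, &exitErr) {
		return "", err // a real failure, e.g. binary not found
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := hostState("old-k8s-version-061000")
	if err != nil {
		panic(err)
	}
	fmt.Println("host:", state) // the log above shows "Stopped"
}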

TestStartStop/group/no-preload/serial/Stop (3.49s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-120000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-120000 --alsologtostderr -v=3: (3.48636775s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.49s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-120000 -n no-preload-120000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-120000 -n no-preload-120000: exit status 7 (49.436542ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-120000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (4.08s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-397000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-397000 --alsologtostderr -v=3: (4.083734042s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (4.08s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-397000 -n embed-certs-397000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-397000 -n embed-certs-397000: exit status 7 (51.364167ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-397000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (2.82s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-188000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-188000 --alsologtostderr -v=3: (2.823501458s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (2.82s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-188000 -n default-k8s-diff-port-188000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-188000 -n default-k8s-diff-port-188000: exit status 7 (55.348ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-188000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-529000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.24s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-529000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-529000 --alsologtostderr -v=3: (3.24212075s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.24s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000: exit status 7 (57.111292ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-529000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)


Test skip (24/266)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-rc.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0-rc.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-rc.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0-rc.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (13.38s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-369000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2282657127/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1723458048758961000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2282657127/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1723458048758961000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2282657127/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1723458048758961000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2282657127/001/test-1723458048758961000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (49.442834ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.655875ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.288417ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.959833ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.094542ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.839125ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.69475ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh "sudo umount -f /mount-9p": exit status 83 (43.821917ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-369000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-369000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2282657127/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (13.38s)

TestFunctional/parallel/MountCmd/specific-port (13.46s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-369000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2252781419/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (62.131958ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (82.328625ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.571334ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.98275ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (81.309334ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (76.981167ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.693417ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (82.0965ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh "sudo umount -f /mount-9p": exit status 83 (45.545625ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-369000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-369000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2252781419/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (13.46s)

TestFunctional/parallel/MountCmd/VerifyCleanup (14.25s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-369000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup886954150/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-369000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup886954150/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-369000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup886954150/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T" /mount1: exit status 83 (83.96525ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T" /mount1: exit status 83 (87.140459ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T" /mount1: exit status 83 (85.662459ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T" /mount1: exit status 83 (85.048125ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T" /mount1: exit status 83 (87.418667ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T" /mount1: exit status 83 (84.953875ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-369000 ssh "findmnt -T" /mount1: exit status 83 (85.18875ms)

-- stdout --
	* The control-plane node functional-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-369000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-369000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup886954150/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-369000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup886954150/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-369000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup886954150/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (14.25s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.27s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-487000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-487000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-487000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-487000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-487000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-487000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-487000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-487000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-487000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-487000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-487000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: /etc/hosts:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: /etc/resolv.conf:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-487000

>>> host: crictl pods:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: crictl containers:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> k8s: describe netcat deployment:
error: context "cilium-487000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-487000" does not exist

>>> k8s: netcat logs:
error: context "cilium-487000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-487000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-487000" does not exist

>>> k8s: coredns logs:
error: context "cilium-487000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-487000" does not exist

>>> k8s: api server logs:
error: context "cilium-487000" does not exist

>>> host: /etc/cni:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: ip a s:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: ip r s:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: iptables-save:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: iptables table nat:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-487000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-487000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-487000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-487000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-487000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-487000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-487000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-487000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-487000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-487000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-487000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: kubelet daemon config:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> k8s: kubelet logs:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-487000

>>> host: docker daemon status:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: docker daemon config:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: docker system info:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: cri-docker daemon status:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: cri-docker daemon config:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: cri-dockerd version:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: containerd daemon status:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: containerd daemon config:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: containerd config dump:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: crio daemon status:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: crio daemon config:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: /etc/crio:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

>>> host: crio config:
* Profile "cilium-487000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487000"

----------------------- debugLogs end: cilium-487000 [took: 2.166791792s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-487000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-487000
--- SKIP: TestNetworkPlugins/group/cilium (2.27s)

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-364000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-364000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
