Test Report: QEMU_macOS 19124

b47018a41c76a7aa401be8ce52e856258110c967:2024-06-24:35020

Tests failed (156/258)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 12.2
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.12
27 TestAddons/Setup 10.26
28 TestCertOptions 12.06
29 TestCertExpiration 198.34
30 TestDockerFlags 10.36
31 TestForceSystemdFlag 12.52
32 TestForceSystemdEnv 10.09
38 TestErrorSpam/setup 9.83
47 TestFunctional/serial/StartWithProxy 9.93
49 TestFunctional/serial/SoftStart 5.25
50 TestFunctional/serial/KubeContext 0.06
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.15
61 TestFunctional/serial/MinikubeKubectlCmd 0.63
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.96
63 TestFunctional/serial/ExtraConfig 5.25
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.08
66 TestFunctional/serial/LogsFileCmd 0.07
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.2
73 TestFunctional/parallel/StatusCmd 0.12
77 TestFunctional/parallel/ServiceCmdConnect 0.14
79 TestFunctional/parallel/PersistentVolumeClaim 0.03
81 TestFunctional/parallel/SSHCmd 0.12
82 TestFunctional/parallel/CpCmd 0.27
84 TestFunctional/parallel/FileSync 0.07
85 TestFunctional/parallel/CertSync 0.29
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
95 TestFunctional/parallel/Version/components 0.04
96 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
97 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
98 TestFunctional/parallel/ImageCommands/ImageListJson 0.03
99 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
100 TestFunctional/parallel/ImageCommands/ImageBuild 0.11
102 TestFunctional/parallel/DockerEnv/bash 0.04
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
106 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
107 TestFunctional/parallel/ServiceCmd/List 0.04
108 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
109 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
110 TestFunctional/parallel/ServiceCmd/Format 0.04
111 TestFunctional/parallel/ServiceCmd/URL 0.04
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 87.78
118 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.31
119 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.35
120 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.17
121 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
123 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 38.7
141 TestMultiControlPlane/serial/StartCluster 10.07
142 TestMultiControlPlane/serial/DeployApp 95.67
143 TestMultiControlPlane/serial/PingHostFromPods 0.09
144 TestMultiControlPlane/serial/AddWorkerNode 0.07
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.1
147 TestMultiControlPlane/serial/CopyFile 0.06
148 TestMultiControlPlane/serial/StopSecondaryNode 0.11
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.1
150 TestMultiControlPlane/serial/RestartSecondaryNode 52.32
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.1
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 8.86
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.1
155 TestMultiControlPlane/serial/StopCluster 3.24
156 TestMultiControlPlane/serial/RestartCluster 5.26
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.1
158 TestMultiControlPlane/serial/AddSecondaryNode 0.07
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.1
162 TestImageBuild/serial/Setup 9.97
165 TestJSONOutput/start/Command 9.72
171 TestJSONOutput/pause/Command 0.08
177 TestJSONOutput/unpause/Command 0.04
194 TestMinikubeProfile 10.17
197 TestMountStart/serial/StartWithMountFirst 10
200 TestMultiNode/serial/FreshStart2Nodes 9.96
201 TestMultiNode/serial/DeployApp2Nodes 115.94
202 TestMultiNode/serial/PingHostFrom2Pods 0.09
203 TestMultiNode/serial/AddNode 0.07
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.1
206 TestMultiNode/serial/CopyFile 0.06
207 TestMultiNode/serial/StopNode 0.13
208 TestMultiNode/serial/StartAfterStop 54.76
209 TestMultiNode/serial/RestartKeepsNodes 8.94
210 TestMultiNode/serial/DeleteNode 0.1
211 TestMultiNode/serial/StopMultiNode 3.92
212 TestMultiNode/serial/RestartMultiNode 5.25
213 TestMultiNode/serial/ValidateNameConflict 20.32
217 TestPreload 10.12
219 TestScheduledStopUnix 10.24
220 TestSkaffold 12.16
223 TestRunningBinaryUpgrade 615.45
225 TestKubernetesUpgrade 19.26
229 TestNoKubernetes/serial/StartWithK8s 12.17
230 TestNoKubernetes/serial/StartWithStopK8s 7.44
231 TestNoKubernetes/serial/Start 7.39
236 TestStoppedBinaryUpgrade/Upgrade 575.27
237 TestNoKubernetes/serial/StartNoArgs 5.34
248 TestPause/serial/Start 9.95
260 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.43
261 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.38
263 TestStartStop/group/old-k8s-version/serial/FirstStart 9.89
264 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
265 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
268 TestStartStop/group/old-k8s-version/serial/SecondStart 5.26
269 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
270 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
271 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
272 TestStartStop/group/old-k8s-version/serial/Pause 0.1
274 TestStartStop/group/no-preload/serial/FirstStart 10.06
275 TestStartStop/group/no-preload/serial/DeployApp 0.09
276 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
279 TestStartStop/group/no-preload/serial/SecondStart 5.26
280 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
281 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
282 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
283 TestStartStop/group/no-preload/serial/Pause 0.1
285 TestStartStop/group/embed-certs/serial/FirstStart 9.89
286 TestStartStop/group/embed-certs/serial/DeployApp 0.09
287 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
290 TestStartStop/group/embed-certs/serial/SecondStart 5.26
291 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
292 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
293 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
294 TestStartStop/group/embed-certs/serial/Pause 0.1
296 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.91
297 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
298 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
301 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.25
302 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
303 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
304 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
305 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
307 TestStartStop/group/newest-cni/serial/FirstStart 10.05
312 TestStartStop/group/newest-cni/serial/SecondStart 5.25
315 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
316 TestStartStop/group/newest-cni/serial/Pause 0.1
317 TestNetworkPlugins/group/auto/Start 9.88
318 TestNetworkPlugins/group/kindnet/Start 10.05
319 TestNetworkPlugins/group/calico/Start 9.83
320 TestNetworkPlugins/group/custom-flannel/Start 9.88
321 TestNetworkPlugins/group/false/Start 9.93
322 TestNetworkPlugins/group/enable-default-cni/Start 9.83
323 TestNetworkPlugins/group/flannel/Start 9.78
324 TestNetworkPlugins/group/bridge/Start 9.94
325 TestNetworkPlugins/group/kubenet/Start 10.09
TestDownloadOnly/v1.20.0/json-events (12.2s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-954000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-954000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (12.196207625s)

-- stdout --
	{"specversion":"1.0","id":"8d3616b1-f6ab-4111-acc9-3ef2886b50e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-954000] minikube v1.33.1 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5c869546-1138-451b-baf3-21d73e4aca88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19124"}}
	{"specversion":"1.0","id":"786db7a9-8dd7-47ef-9f77-e42265c19254","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig"}}
	{"specversion":"1.0","id":"5d27e800-796b-4fc3-9a16-9e26b0fb959d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"14ef64fd-0f85-4744-9c0c-e9e797e69590","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2ae7d43c-198d-4b83-8cdd-7078e8e4240c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube"}}
	{"specversion":"1.0","id":"9241d3ca-ecd4-4b84-b92e-9d0ab9ce9797","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"40f038fb-8aaa-48a9-a6cf-3a337e711156","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"18661c98-990c-4444-9559-d21accd1d83a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"f7261aa9-f406-4cac-98d2-299ad8159e18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a166b170-ff68-453d-8abd-abb581bf5dcf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-954000\" primary control-plane node in \"download-only-954000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e3e0b3bd-1184-4bd7-9be4-288548876f0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8dc4785d-41e7-41ac-8419-e7cc562a68b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19124-4612/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1071b1980 0x1071b1980 0x1071b1980 0x1071b1980 0x1071b1980 0x1071b1980 0x1071b1980] Decompressors:map[bz2:0x14000809420 gz:0x14000809428 tar:0x140008093d0 tar.bz2:0x140008093e0 tar.gz:0x140008093f0 tar.xz:0x14000809400 tar.zst:0x14000809410 tbz2:0x140008093e0 tgz:0x14
0008093f0 txz:0x14000809400 tzst:0x14000809410 xz:0x14000809430 zip:0x14000809440 zst:0x14000809438] Getters:map[file:0x14001720570 http:0x14000462550 https:0x140004625a0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"8018714c-3170-4953-9e9b-755f77b029e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0624 03:18:45.628580    5138 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:18:45.628728    5138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:18:45.628732    5138 out.go:304] Setting ErrFile to fd 2...
	I0624 03:18:45.628734    5138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:18:45.628875    5138 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	W0624 03:18:45.628953    5138 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19124-4612/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19124-4612/.minikube/config/config.json: no such file or directory
	I0624 03:18:45.630286    5138 out.go:298] Setting JSON to true
	I0624 03:18:45.649071    5138 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4695,"bootTime":1719219630,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:18:45.649148    5138 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:18:45.665016    5138 out.go:97] [download-only-954000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:18:45.665128    5138 notify.go:220] Checking for updates...
	W0624 03:18:45.665224    5138 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball: no such file or directory
	I0624 03:18:45.669782    5138 out.go:169] MINIKUBE_LOCATION=19124
	I0624 03:18:45.677251    5138 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:18:45.703456    5138 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:18:45.706446    5138 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:18:45.707560    5138 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	W0624 03:18:45.716383    5138 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0624 03:18:45.716612    5138 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:18:45.717828    5138 out.go:97] Using the qemu2 driver based on user configuration
	I0624 03:18:45.717854    5138 start.go:297] selected driver: qemu2
	I0624 03:18:45.717878    5138 start.go:901] validating driver "qemu2" against <nil>
	I0624 03:18:45.717964    5138 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0624 03:18:45.720347    5138 out.go:169] Automatically selected the socket_vmnet network
	I0624 03:18:45.728049    5138 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0624 03:18:45.728153    5138 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0624 03:18:45.728269    5138 cni.go:84] Creating CNI manager for ""
	I0624 03:18:45.728290    5138 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0624 03:18:45.728363    5138 start.go:340] cluster config:
	{Name:download-only-954000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-954000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:18:45.733858    5138 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:18:45.737309    5138 out.go:97] Downloading VM boot image ...
	I0624 03:18:45.737345    5138 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso
	I0624 03:18:50.522381    5138 out.go:97] Starting "download-only-954000" primary control-plane node in "download-only-954000" cluster
	I0624 03:18:50.522401    5138 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0624 03:18:50.573046    5138 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0624 03:18:50.573052    5138 cache.go:56] Caching tarball of preloaded images
	I0624 03:18:50.573394    5138 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0624 03:18:50.578376    5138 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0624 03:18:50.578382    5138 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0624 03:18:50.654315    5138 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0624 03:18:56.652059    5138 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0624 03:18:56.652209    5138 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0624 03:18:57.355869    5138 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0624 03:18:57.356072    5138 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/download-only-954000/config.json ...
	I0624 03:18:57.356089    5138 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/download-only-954000/config.json: {Name:mkfb538539f791a6e1396e0e1b122bd007f20dd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:18:57.356684    5138 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0624 03:18:57.356882    5138 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0624 03:18:57.746243    5138 out.go:169] 
	W0624 03:18:57.754604    5138 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19124-4612/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1071b1980 0x1071b1980 0x1071b1980 0x1071b1980 0x1071b1980 0x1071b1980 0x1071b1980] Decompressors:map[bz2:0x14000809420 gz:0x14000809428 tar:0x140008093d0 tar.bz2:0x140008093e0 tar.gz:0x140008093f0 tar.xz:0x14000809400 tar.zst:0x14000809410 tbz2:0x140008093e0 tgz:0x140008093f0 txz:0x14000809400 tzst:0x14000809410 xz:0x14000809430 zip:0x14000809440 zst:0x14000809438] Getters:map[file:0x14001720570 http:0x14000462550 https:0x140004625a0] Dir:false ProgressList
ener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0624 03:18:57.754654    5138 out_reason.go:110] 
	W0624 03:18:57.761366    5138 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:18:57.764996    5138 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-954000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (12.20s)
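The root cause is in the INET_CACHE_KUBECTL error above: the checksum file at https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 returns HTTP 404, so minikube cannot cache kubectl for darwin/arm64 at v1.20.0 (upstream most likely never published darwin/arm64 kubectl binaries for that release). A minimal Go sketch, hypothetical and not part of the test suite, that probes the same URL to confirm the artifact is missing:

// probe_kubectl.go: hypothetical diagnostic, not part of the minikube test suite.
// Sends a HEAD request to the checksum URL from the failure above; a 404 here
// confirms the download error is an upstream artifact-availability problem
// rather than a flake on the test host.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// URL copied verbatim from the INET_CACHE_KUBECTL error message.
	url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	// Expected for this report: "404 Not Found", matching
	// "Error downloading checksum file: bad response code: 404".
	fmt.Println(url, "->", resp.Status)
}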

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19124-4612/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
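This subtest fails as a direct consequence of the json-events failure above: the cached kubectl binary was never written, so the existence check reports "no such file or directory". A minimal sketch of the stat-based check implied by the error text (the real assertion lives in aaa_download_only_test.go and may differ):

// Hypothetical sketch of the existence check implied by the error above;
// the actual assertion is at aaa_download_only_test.go:175.
package main

import (
	"fmt"
	"os"
)

func main() {
	// Path copied from the failure message.
	path := "/Users/jenkins/minikube-integration/19124-4612/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
	if _, err := os.Stat(path); err != nil {
		// Because the earlier download failed, this reproduces the
		// "no such file or directory" error the test reports.
		fmt.Println("binary missing:", err)
		return
	}
	fmt.Println("binary present:", path)
}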

TestOffline (10.12s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-953000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-953000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.919739333s)

-- stdout --
	* [offline-docker-953000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-953000" primary control-plane node in "offline-docker-953000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-953000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0624 03:30:30.113964    6697 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:30:30.114092    6697 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:30:30.114096    6697 out.go:304] Setting ErrFile to fd 2...
	I0624 03:30:30.114098    6697 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:30:30.114228    6697 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:30:30.115409    6697 out.go:298] Setting JSON to false
	I0624 03:30:30.135623    6697 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5400,"bootTime":1719219630,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:30:30.135680    6697 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:30:30.140850    6697 out.go:177] * [offline-docker-953000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:30:30.149922    6697 notify.go:220] Checking for updates...
	I0624 03:30:30.155312    6697 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:30:30.162880    6697 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:30:30.169864    6697 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:30:30.175752    6697 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:30:30.181842    6697 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:30:30.185836    6697 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:30:30.189189    6697 config.go:182] Loaded profile config "multinode-913000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:30:30.189256    6697 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:30:30.200962    6697 out.go:177] * Using the qemu2 driver based on user configuration
	I0624 03:30:30.208861    6697 start.go:297] selected driver: qemu2
	I0624 03:30:30.208868    6697 start.go:901] validating driver "qemu2" against <nil>
	I0624 03:30:30.208875    6697 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:30:30.211184    6697 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0624 03:30:30.216858    6697 out.go:177] * Automatically selected the socket_vmnet network
	I0624 03:30:30.227974    6697 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 03:30:30.227994    6697 cni.go:84] Creating CNI manager for ""
	I0624 03:30:30.228005    6697 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:30:30.228010    6697 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0624 03:30:30.228049    6697 start.go:340] cluster config:
	{Name:offline-docker-953000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:offline-docker-953000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bi
n/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:30:30.233614    6697 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:30:30.236812    6697 out.go:177] * Starting "offline-docker-953000" primary control-plane node in "offline-docker-953000" cluster
	I0624 03:30:30.244862    6697 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:30:30.244898    6697 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0624 03:30:30.244905    6697 cache.go:56] Caching tarball of preloaded images
	I0624 03:30:30.244990    6697 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0624 03:30:30.244997    6697 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:30:30.245066    6697 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/offline-docker-953000/config.json ...
	I0624 03:30:30.245079    6697 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/offline-docker-953000/config.json: {Name:mkc9d52ed27dc00d427152c9244e93ba54b802a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:30:30.245361    6697 start.go:360] acquireMachinesLock for offline-docker-953000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:30:30.245405    6697 start.go:364] duration metric: took 34.083µs to acquireMachinesLock for "offline-docker-953000"
	I0624 03:30:30.245417    6697 start.go:93] Provisioning new machine with config: &{Name:offline-docker-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.2 ClusterName:offline-docker-953000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:30:30.245461    6697 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:30:30.249846    6697 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0624 03:30:30.265768    6697 start.go:159] libmachine.API.Create for "offline-docker-953000" (driver="qemu2")
	I0624 03:30:30.265806    6697 client.go:168] LocalClient.Create starting
	I0624 03:30:30.265860    6697 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:30:30.265888    6697 main.go:141] libmachine: Decoding PEM data...
	I0624 03:30:30.265898    6697 main.go:141] libmachine: Parsing certificate...
	I0624 03:30:30.265943    6697 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:30:30.265966    6697 main.go:141] libmachine: Decoding PEM data...
	I0624 03:30:30.265973    6697 main.go:141] libmachine: Parsing certificate...
	I0624 03:30:30.266330    6697 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:30:30.494534    6697 main.go:141] libmachine: Creating SSH key...
	I0624 03:30:30.540761    6697 main.go:141] libmachine: Creating Disk image...
	I0624 03:30:30.540766    6697 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:30:30.540966    6697 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/offline-docker-953000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/offline-docker-953000/disk.qcow2
	I0624 03:30:30.553447    6697 main.go:141] libmachine: STDOUT: 
	I0624 03:30:30.553471    6697 main.go:141] libmachine: STDERR: 
	I0624 03:30:30.553519    6697 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/offline-docker-953000/disk.qcow2 +20000M
	I0624 03:30:30.564156    6697 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:30:30.564175    6697 main.go:141] libmachine: STDERR: 
	I0624 03:30:30.564194    6697 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/offline-docker-953000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/offline-docker-953000/disk.qcow2
	I0624 03:30:30.564197    6697 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:30:30.564244    6697 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/offline-docker-953000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/offline-docker-953000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/offline-docker-953000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:6f:dc:b8:66:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/offline-docker-953000/disk.qcow2
	I0624 03:30:30.565785    6697 main.go:141] libmachine: STDOUT: 
	I0624 03:30:30.565804    6697 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:30:30.565824    6697 client.go:171] duration metric: took 300.014625ms to LocalClient.Create
	I0624 03:30:32.567983    6697 start.go:128] duration metric: took 2.322520417s to createHost
	I0624 03:30:32.568138    6697 start.go:83] releasing machines lock for "offline-docker-953000", held for 2.322743s
	W0624 03:30:32.568185    6697 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:30:32.581489    6697 out.go:177] * Deleting "offline-docker-953000" in qemu2 ...
	W0624 03:30:32.610045    6697 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:30:32.610066    6697 start.go:728] Will try again in 5 seconds ...
	I0624 03:30:37.610994    6697 start.go:360] acquireMachinesLock for offline-docker-953000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:30:37.611619    6697 start.go:364] duration metric: took 533.875µs to acquireMachinesLock for "offline-docker-953000"
	I0624 03:30:37.611778    6697 start.go:93] Provisioning new machine with config: &{Name:offline-docker-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.2 ClusterName:offline-docker-953000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:30:37.612033    6697 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:30:37.623660    6697 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0624 03:30:37.672500    6697 start.go:159] libmachine.API.Create for "offline-docker-953000" (driver="qemu2")
	I0624 03:30:37.672559    6697 client.go:168] LocalClient.Create starting
	I0624 03:30:37.672679    6697 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:30:37.672790    6697 main.go:141] libmachine: Decoding PEM data...
	I0624 03:30:37.672807    6697 main.go:141] libmachine: Parsing certificate...
	I0624 03:30:37.672869    6697 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:30:37.672913    6697 main.go:141] libmachine: Decoding PEM data...
	I0624 03:30:37.672926    6697 main.go:141] libmachine: Parsing certificate...
	I0624 03:30:37.673406    6697 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:30:37.846710    6697 main.go:141] libmachine: Creating SSH key...
	I0624 03:30:37.928566    6697 main.go:141] libmachine: Creating Disk image...
	I0624 03:30:37.928575    6697 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:30:37.928790    6697 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/offline-docker-953000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/offline-docker-953000/disk.qcow2
	I0624 03:30:37.941029    6697 main.go:141] libmachine: STDOUT: 
	I0624 03:30:37.941056    6697 main.go:141] libmachine: STDERR: 
	I0624 03:30:37.941107    6697 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/offline-docker-953000/disk.qcow2 +20000M
	I0624 03:30:37.951724    6697 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:30:37.951740    6697 main.go:141] libmachine: STDERR: 
	I0624 03:30:37.951749    6697 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/offline-docker-953000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/offline-docker-953000/disk.qcow2
	I0624 03:30:37.951755    6697 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:30:37.951796    6697 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/offline-docker-953000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/offline-docker-953000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/offline-docker-953000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:a7:c1:f4:f1:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/offline-docker-953000/disk.qcow2
	I0624 03:30:37.953334    6697 main.go:141] libmachine: STDOUT: 
	I0624 03:30:37.953350    6697 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:30:37.953362    6697 client.go:171] duration metric: took 280.800875ms to LocalClient.Create
	I0624 03:30:39.955516    6697 start.go:128] duration metric: took 2.343476s to createHost
	I0624 03:30:39.955590    6697 start.go:83] releasing machines lock for "offline-docker-953000", held for 2.3439665s
	W0624 03:30:39.955929    6697 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-953000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-953000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:30:39.968526    6697 out.go:177] 
	W0624 03:30:39.975617    6697 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:30:39.975667    6697 out.go:239] * 
	* 
	W0624 03:30:39.978266    6697 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:30:39.992380    6697 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-953000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-06-24 03:30:40.004825 -0700 PDT m=+714.474341710
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-953000 -n offline-docker-953000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-953000 -n offline-docker-953000: exit status 7 (64.489708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-953000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-953000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-953000
--- FAIL: TestOffline (10.12s)
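TestOffline fails for a different reason than the download tests: every qemu2 VM start in this run dies with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, meaning the socket_vmnet daemon was not accepting connections on the Jenkins agent. The same signature recurs in TestAddons/Setup below and in most of the other start-based failures in the table. A minimal Go sketch, hypothetical and not part of minikube, that probes the unix socket the way socket_vmnet_client does, using the SocketVMnetPath shown in the cluster config above:

// Hypothetical diagnostic, not part of minikube: dial the socket_vmnet
// unix socket directly. "connection refused" (daemon dead or socket stale)
// or "no such file or directory" (daemon never started) here reproduces
// the GUEST_PROVISION failure seen across the qemu2 tests in this run.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path taken from SocketVMnetPath in the cluster config above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the dial fails, restarting the socket_vmnet service on the agent (however it is managed there) would likely clear this whole class of failures.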

TestAddons/Setup (10.26s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-495000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-495000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.254162583s)

-- stdout --
	* [addons-495000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-495000" primary control-plane node in "addons-495000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-495000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0624 03:19:09.311252    5247 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:19:09.311385    5247 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:19:09.311388    5247 out.go:304] Setting ErrFile to fd 2...
	I0624 03:19:09.311390    5247 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:19:09.311519    5247 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:19:09.312630    5247 out.go:298] Setting JSON to false
	I0624 03:19:09.328749    5247 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4719,"bootTime":1719219630,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:19:09.328820    5247 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:19:09.333610    5247 out.go:177] * [addons-495000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:19:09.340522    5247 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:19:09.340590    5247 notify.go:220] Checking for updates...
	I0624 03:19:09.347498    5247 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:19:09.350550    5247 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:19:09.353568    5247 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:19:09.356528    5247 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:19:09.359590    5247 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:19:09.362746    5247 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:19:09.366583    5247 out.go:177] * Using the qemu2 driver based on user configuration
	I0624 03:19:09.373587    5247 start.go:297] selected driver: qemu2
	I0624 03:19:09.373594    5247 start.go:901] validating driver "qemu2" against <nil>
	I0624 03:19:09.373603    5247 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:19:09.375963    5247 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0624 03:19:09.379593    5247 out.go:177] * Automatically selected the socket_vmnet network
	I0624 03:19:09.383500    5247 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 03:19:09.383518    5247 cni.go:84] Creating CNI manager for ""
	I0624 03:19:09.383525    5247 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:19:09.383529    5247 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0624 03:19:09.383568    5247 start.go:340] cluster config:
	{Name:addons-495000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-495000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:19:09.388020    5247 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:19:09.395582    5247 out.go:177] * Starting "addons-495000" primary control-plane node in "addons-495000" cluster
	I0624 03:19:09.403614    5247 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:19:09.403636    5247 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0624 03:19:09.403649    5247 cache.go:56] Caching tarball of preloaded images
	I0624 03:19:09.403718    5247 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0624 03:19:09.403729    5247 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:19:09.403934    5247 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/addons-495000/config.json ...
	I0624 03:19:09.403945    5247 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/addons-495000/config.json: {Name:mke259e7aabff8b4e2c0b5a22d755aac75b10fb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:19:09.404324    5247 start.go:360] acquireMachinesLock for addons-495000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:19:09.404395    5247 start.go:364] duration metric: took 63.958µs to acquireMachinesLock for "addons-495000"
	I0624 03:19:09.404408    5247 start.go:93] Provisioning new machine with config: &{Name:addons-495000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-495000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:19:09.404437    5247 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:19:09.412550    5247 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0624 03:19:09.432306    5247 start.go:159] libmachine.API.Create for "addons-495000" (driver="qemu2")
	I0624 03:19:09.432332    5247 client.go:168] LocalClient.Create starting
	I0624 03:19:09.432451    5247 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:19:09.528940    5247 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:19:09.738282    5247 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:19:09.961445    5247 main.go:141] libmachine: Creating SSH key...
	I0624 03:19:10.056652    5247 main.go:141] libmachine: Creating Disk image...
	I0624 03:19:10.056659    5247 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:19:10.056866    5247 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/addons-495000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/addons-495000/disk.qcow2
	I0624 03:19:10.069914    5247 main.go:141] libmachine: STDOUT: 
	I0624 03:19:10.069937    5247 main.go:141] libmachine: STDERR: 
	I0624 03:19:10.070000    5247 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/addons-495000/disk.qcow2 +20000M
	I0624 03:19:10.080769    5247 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:19:10.080786    5247 main.go:141] libmachine: STDERR: 
	I0624 03:19:10.080798    5247 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/addons-495000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/addons-495000/disk.qcow2
	I0624 03:19:10.080806    5247 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:19:10.080849    5247 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/addons-495000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/addons-495000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/addons-495000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:90:21:d3:28:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/addons-495000/disk.qcow2
	I0624 03:19:10.082579    5247 main.go:141] libmachine: STDOUT: 
	I0624 03:19:10.082595    5247 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:19:10.082621    5247 client.go:171] duration metric: took 650.287167ms to LocalClient.Create
	I0624 03:19:12.084795    5247 start.go:128] duration metric: took 2.680354375s to createHost
	I0624 03:19:12.084847    5247 start.go:83] releasing machines lock for "addons-495000", held for 2.680460458s
	W0624 03:19:12.084914    5247 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:19:12.094265    5247 out.go:177] * Deleting "addons-495000" in qemu2 ...
	W0624 03:19:12.130328    5247 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:19:12.130356    5247 start.go:728] Will try again in 5 seconds ...
	I0624 03:19:17.132478    5247 start.go:360] acquireMachinesLock for addons-495000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:19:17.132887    5247 start.go:364] duration metric: took 326.834µs to acquireMachinesLock for "addons-495000"
	I0624 03:19:17.133451    5247 start.go:93] Provisioning new machine with config: &{Name:addons-495000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-495000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:19:17.133758    5247 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:19:17.147380    5247 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0624 03:19:17.197193    5247 start.go:159] libmachine.API.Create for "addons-495000" (driver="qemu2")
	I0624 03:19:17.197250    5247 client.go:168] LocalClient.Create starting
	I0624 03:19:17.197354    5247 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:19:17.197417    5247 main.go:141] libmachine: Decoding PEM data...
	I0624 03:19:17.197438    5247 main.go:141] libmachine: Parsing certificate...
	I0624 03:19:17.197545    5247 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:19:17.197594    5247 main.go:141] libmachine: Decoding PEM data...
	I0624 03:19:17.197607    5247 main.go:141] libmachine: Parsing certificate...
	I0624 03:19:17.198203    5247 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:19:17.365787    5247 main.go:141] libmachine: Creating SSH key...
	I0624 03:19:17.468747    5247 main.go:141] libmachine: Creating Disk image...
	I0624 03:19:17.468761    5247 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:19:17.468978    5247 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/addons-495000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/addons-495000/disk.qcow2
	I0624 03:19:17.481461    5247 main.go:141] libmachine: STDOUT: 
	I0624 03:19:17.481489    5247 main.go:141] libmachine: STDERR: 
	I0624 03:19:17.481544    5247 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/addons-495000/disk.qcow2 +20000M
	I0624 03:19:17.492263    5247 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:19:17.492287    5247 main.go:141] libmachine: STDERR: 
	I0624 03:19:17.492307    5247 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/addons-495000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/addons-495000/disk.qcow2
	I0624 03:19:17.492314    5247 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:19:17.492351    5247 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/addons-495000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/addons-495000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/addons-495000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:4d:0c:c8:74:4e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/addons-495000/disk.qcow2
	I0624 03:19:17.494040    5247 main.go:141] libmachine: STDOUT: 
	I0624 03:19:17.494053    5247 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:19:17.494067    5247 client.go:171] duration metric: took 296.814ms to LocalClient.Create
	I0624 03:19:19.496228    5247 start.go:128] duration metric: took 2.362438292s to createHost
	I0624 03:19:19.496312    5247 start.go:83] releasing machines lock for "addons-495000", held for 2.363386792s
	W0624 03:19:19.496690    5247 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-495000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-495000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:19:19.507113    5247 out.go:177] 
	W0624 03:19:19.511143    5247 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:19:19.511170    5247 out.go:239] * 
	* 
	W0624 03:19:19.513893    5247 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:19:19.523135    5247 out.go:177] 

** /stderr **
addons_test.go:112: out/minikube-darwin-arm64 start -p addons-495000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.26s)
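The "(dbg) Run" / "(dbg) Non-zero exit" annotations above are the harness executing the binary and inspecting its exit code. A hedged sketch of that pattern in Go (not the suite's actual helper; the profile name is a placeholder):

	// runcmd.go - capture the exit status of a minikube invocation the
	// way the "(dbg) Non-zero exit" lines report it. Illustrative only.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "demo", "--driver=qemu2")
		out, err := cmd.CombinedOutput()
		if exitErr, ok := err.(*exec.ExitError); ok {
			// These logs show 80 for GUEST_PROVISION failures and 83
			// when a command targets a host that is not running.
			fmt.Printf("non-zero exit: %d\n%s", exitErr.ExitCode(), out)
			return
		}
		if err != nil {
			fmt.Println("could not start command:", err)
			return
		}
		fmt.Printf("ok:\n%s", out)
	}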

TestCertOptions (12.06s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-362000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-362000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (11.773953834s)

-- stdout --
	* [cert-options-362000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-362000" primary control-plane node in "cert-options-362000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-362000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-362000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-362000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-362000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-362000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (79.988167ms)

-- stdout --
	* The control-plane node cert-options-362000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-362000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-362000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-362000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-362000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-362000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (40.544916ms)

-- stdout --
	* The control-plane node cert-options-362000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-362000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-362000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-362000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-362000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-06-24 03:42:01.867693 -0700 PDT m=+1396.343158835
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-362000 -n cert-options-362000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-362000 -n cert-options-362000: exit status 7 (30.373959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-362000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-362000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-362000
--- FAIL: TestCertOptions (12.06s)
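TestCertOptions never reached a running apiserver: the SAN assertions at cert_options_test.go:69 report every expected entry (127.0.0.1, 192.168.15.15, localhost, www.google.com) as missing because the openssl command ran against a stopped host (exit status 83). The real check shells out to openssl inside the VM, as the commands above show; for reference, a sketch of the same inspection with Go's crypto/x509 against a hypothetical local copy of the certificate. It also prints NotAfter, the field TestCertExpiration below is concerned with:

	// certinspect.go - print the SAN entries and expiry of a PEM
	// certificate; the file path is hypothetical.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt") // hypothetical local copy
		if err != nil {
			fmt.Println("read:", err)
			return
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Println("no PEM block found")
			return
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println("parse:", err)
			return
		}
		fmt.Println("DNS SANs:", cert.DNSNames)
		fmt.Println("IP SANs: ", cert.IPAddresses)
		fmt.Println("expires in:", time.Until(cert.NotAfter).Round(time.Second))
	}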

TestCertExpiration (198.34s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-509000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-509000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (11.863581042s)

-- stdout --
	* [cert-expiration-509000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-509000" primary control-plane node in "cert-expiration-509000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-509000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-509000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-509000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-509000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-509000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (6.3048625s)

-- stdout --
	* [cert-expiration-509000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-509000" primary control-plane node in "cert-expiration-509000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-509000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-509000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-509000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-509000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-509000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-509000" primary control-plane node in "cert-expiration-509000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-509000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-509000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-509000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-06-24 03:44:58.372007 -0700 PDT m=+1572.849012251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-509000 -n cert-expiration-509000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-509000 -n cert-expiration-509000: exit status 7 (61.46375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-509000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-509000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-509000
--- FAIL: TestCertExpiration (198.34s)
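The 198.34s wall time here, against 10-12s for the other failures, is not VM work: the first failed start took 11.86s and the second 6.30s, which suggests the remaining ~180s is the test waiting out the 3m --cert-expiration window between the two start attempts. Neither start got far enough to generate certificates, so an inspection like the certinspect.go sketch above would have had nothing to examine.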

TestDockerFlags (10.36s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-180000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-180000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.089226167s)

-- stdout --
	* [docker-flags-180000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-180000" primary control-plane node in "docker-flags-180000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-180000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0624 03:41:29.866180    7298 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:41:29.866517    7298 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:41:29.866522    7298 out.go:304] Setting ErrFile to fd 2...
	I0624 03:41:29.866525    7298 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:41:29.866770    7298 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:41:29.868093    7298 out.go:298] Setting JSON to false
	I0624 03:41:29.884498    7298 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6059,"bootTime":1719219630,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:41:29.884565    7298 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:41:29.891504    7298 out.go:177] * [docker-flags-180000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:41:29.899390    7298 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:41:29.899446    7298 notify.go:220] Checking for updates...
	I0624 03:41:29.907335    7298 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:41:29.908801    7298 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:41:29.913329    7298 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:41:29.916351    7298 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:41:29.917878    7298 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:41:29.921629    7298 config.go:182] Loaded profile config "multinode-913000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:41:29.921667    7298 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:41:29.925311    7298 out.go:177] * Using the qemu2 driver based on user configuration
	I0624 03:41:29.930304    7298 start.go:297] selected driver: qemu2
	I0624 03:41:29.930310    7298 start.go:901] validating driver "qemu2" against <nil>
	I0624 03:41:29.930315    7298 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:41:29.932727    7298 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0624 03:41:29.936293    7298 out.go:177] * Automatically selected the socket_vmnet network
	I0624 03:41:29.937910    7298 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0624 03:41:29.937945    7298 cni.go:84] Creating CNI manager for ""
	I0624 03:41:29.937953    7298 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:41:29.937957    7298 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0624 03:41:29.937980    7298 start.go:340] cluster config:
	{Name:docker-flags-180000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:docker-flags-180000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:41:29.942586    7298 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:41:29.951333    7298 out.go:177] * Starting "docker-flags-180000" primary control-plane node in "docker-flags-180000" cluster
	I0624 03:41:29.955316    7298 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:41:29.955333    7298 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0624 03:41:29.955342    7298 cache.go:56] Caching tarball of preloaded images
	I0624 03:41:29.955403    7298 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0624 03:41:29.955410    7298 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:41:29.955502    7298 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/docker-flags-180000/config.json ...
	I0624 03:41:29.955513    7298 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/docker-flags-180000/config.json: {Name:mk2a18746c0371ad8654527e963fc93b1d532642 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:41:29.955733    7298 start.go:360] acquireMachinesLock for docker-flags-180000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:41:29.955768    7298 start.go:364] duration metric: took 27.75µs to acquireMachinesLock for "docker-flags-180000"
	I0624 03:41:29.955779    7298 start.go:93] Provisioning new machine with config: &{Name:docker-flags-180000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:docker-flags-180000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:41:29.955809    7298 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:41:29.963351    7298 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0624 03:41:29.980915    7298 start.go:159] libmachine.API.Create for "docker-flags-180000" (driver="qemu2")
	I0624 03:41:29.980943    7298 client.go:168] LocalClient.Create starting
	I0624 03:41:29.981007    7298 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:41:29.981037    7298 main.go:141] libmachine: Decoding PEM data...
	I0624 03:41:29.981050    7298 main.go:141] libmachine: Parsing certificate...
	I0624 03:41:29.981090    7298 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:41:29.981114    7298 main.go:141] libmachine: Decoding PEM data...
	I0624 03:41:29.981119    7298 main.go:141] libmachine: Parsing certificate...
	I0624 03:41:29.981537    7298 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:41:30.123919    7298 main.go:141] libmachine: Creating SSH key...
	I0624 03:41:30.420243    7298 main.go:141] libmachine: Creating Disk image...
	I0624 03:41:30.420253    7298 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:41:30.420490    7298 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/docker-flags-180000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/docker-flags-180000/disk.qcow2
	I0624 03:41:30.433683    7298 main.go:141] libmachine: STDOUT: 
	I0624 03:41:30.433702    7298 main.go:141] libmachine: STDERR: 
	I0624 03:41:30.433768    7298 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/docker-flags-180000/disk.qcow2 +20000M
	I0624 03:41:30.445002    7298 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:41:30.445020    7298 main.go:141] libmachine: STDERR: 
	I0624 03:41:30.445036    7298 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/docker-flags-180000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/docker-flags-180000/disk.qcow2
	I0624 03:41:30.445039    7298 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:41:30.445083    7298 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/docker-flags-180000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/docker-flags-180000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/docker-flags-180000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:8a:47:92:b7:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/docker-flags-180000/disk.qcow2
	I0624 03:41:30.446751    7298 main.go:141] libmachine: STDOUT: 
	I0624 03:41:30.446766    7298 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:41:30.446785    7298 client.go:171] duration metric: took 465.839667ms to LocalClient.Create
	I0624 03:41:32.448869    7298 start.go:128] duration metric: took 2.4930665s to createHost
	I0624 03:41:32.448885    7298 start.go:83] releasing machines lock for "docker-flags-180000", held for 2.493133791s
	W0624 03:41:32.448897    7298 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:41:32.457296    7298 out.go:177] * Deleting "docker-flags-180000" in qemu2 ...
	W0624 03:41:32.467595    7298 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:41:32.467606    7298 start.go:728] Will try again in 5 seconds ...
	I0624 03:41:37.469634    7298 start.go:360] acquireMachinesLock for docker-flags-180000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:41:37.469714    7298 start.go:364] duration metric: took 52.959µs to acquireMachinesLock for "docker-flags-180000"
	I0624 03:41:37.469735    7298 start.go:93] Provisioning new machine with config: &{Name:docker-flags-180000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:docker-flags-180000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:41:37.469790    7298 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:41:37.480972    7298 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0624 03:41:37.496249    7298 start.go:159] libmachine.API.Create for "docker-flags-180000" (driver="qemu2")
	I0624 03:41:37.496273    7298 client.go:168] LocalClient.Create starting
	I0624 03:41:37.496334    7298 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:41:37.496368    7298 main.go:141] libmachine: Decoding PEM data...
	I0624 03:41:37.496377    7298 main.go:141] libmachine: Parsing certificate...
	I0624 03:41:37.496409    7298 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:41:37.496431    7298 main.go:141] libmachine: Decoding PEM data...
	I0624 03:41:37.496438    7298 main.go:141] libmachine: Parsing certificate...
	I0624 03:41:37.497380    7298 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:41:37.683896    7298 main.go:141] libmachine: Creating SSH key...
	I0624 03:41:37.858193    7298 main.go:141] libmachine: Creating Disk image...
	I0624 03:41:37.858201    7298 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:41:37.858416    7298 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/docker-flags-180000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/docker-flags-180000/disk.qcow2
	I0624 03:41:37.871111    7298 main.go:141] libmachine: STDOUT: 
	I0624 03:41:37.871134    7298 main.go:141] libmachine: STDERR: 
	I0624 03:41:37.871196    7298 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/docker-flags-180000/disk.qcow2 +20000M
	I0624 03:41:37.881880    7298 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:41:37.881896    7298 main.go:141] libmachine: STDERR: 
	I0624 03:41:37.881909    7298 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/docker-flags-180000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/docker-flags-180000/disk.qcow2
	I0624 03:41:37.881914    7298 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:41:37.881958    7298 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/docker-flags-180000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/docker-flags-180000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/docker-flags-180000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:f8:a5:db:9d:58 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/docker-flags-180000/disk.qcow2
	I0624 03:41:37.883584    7298 main.go:141] libmachine: STDOUT: 
	I0624 03:41:37.883599    7298 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:41:37.883612    7298 client.go:171] duration metric: took 387.338084ms to LocalClient.Create
	I0624 03:41:39.885890    7298 start.go:128] duration metric: took 2.416048583s to createHost
	I0624 03:41:39.885976    7298 start.go:83] releasing machines lock for "docker-flags-180000", held for 2.416268833s
	W0624 03:41:39.886376    7298 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-180000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-180000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:41:39.901621    7298 out.go:177] 
	W0624 03:41:39.904717    7298 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:41:39.904747    7298 out.go:239] * 
	* 
	W0624 03:41:39.907348    7298 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:41:39.916629    7298 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-180000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-180000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-180000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (75.22275ms)

-- stdout --
	* The control-plane node docker-flags-180000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-180000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-180000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-180000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-180000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-180000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-180000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-180000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-180000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (57.734292ms)

-- stdout --
	* The control-plane node docker-flags-180000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-180000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-180000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-180000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-180000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-180000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-06-24 03:41:40.060713 -0700 PDT m=+1374.535988585
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-180000 -n docker-flags-180000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-180000 -n docker-flags-180000: exit status 7 (32.999583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-180000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-180000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-180000
--- FAIL: TestDockerFlags (10.36s)
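Note on the failure mode above: both create attempts die at the same step, with socket_vmnet_client reporting Connection refused on /var/run/socket_vmnet before QEMU ever starts. A minimal, self-contained Go probe (not part of the minikube sources; the socket path is copied from the log) that reproduces just that failing connect in isolation:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Dial the unix socket that socket_vmnet_client wraps; the path
        // matches the SocketVMnetPath value in the cluster config logged above.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Println("socket_vmnet not reachable:", err) // same failure mode as the STDERR lines above
            return
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If this probe also reports "connection refused" on the build agent, the socket_vmnet daemon is simply not running, which would account for every qemu2 start failure in this report rather than anything specific to the flags under test.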

TestForceSystemdFlag (12.52s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-811000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-811000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (12.285507208s)

-- stdout --
	* [force-systemd-flag-811000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-811000" primary control-plane node in "force-systemd-flag-811000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-811000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0624 03:41:37.442821    7330 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:41:37.442967    7330 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:41:37.442970    7330 out.go:304] Setting ErrFile to fd 2...
	I0624 03:41:37.442973    7330 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:41:37.443118    7330 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:41:37.444109    7330 out.go:298] Setting JSON to false
	I0624 03:41:37.460182    7330 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6067,"bootTime":1719219630,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:41:37.460247    7330 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:41:37.465106    7330 out.go:177] * [force-systemd-flag-811000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:41:37.472206    7330 notify.go:220] Checking for updates...
	I0624 03:41:37.480977    7330 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:41:37.485090    7330 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:41:37.488990    7330 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:41:37.493062    7330 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:41:37.501055    7330 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:41:37.508047    7330 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:41:37.512468    7330 config.go:182] Loaded profile config "docker-flags-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:41:37.512537    7330 config.go:182] Loaded profile config "multinode-913000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:41:37.512589    7330 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:41:37.516863    7330 out.go:177] * Using the qemu2 driver based on user configuration
	I0624 03:41:37.525028    7330 start.go:297] selected driver: qemu2
	I0624 03:41:37.525034    7330 start.go:901] validating driver "qemu2" against <nil>
	I0624 03:41:37.525041    7330 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:41:37.527549    7330 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0624 03:41:37.532107    7330 out.go:177] * Automatically selected the socket_vmnet network
	I0624 03:41:37.535234    7330 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0624 03:41:37.535254    7330 cni.go:84] Creating CNI manager for ""
	I0624 03:41:37.535268    7330 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:41:37.535275    7330 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0624 03:41:37.535317    7330 start.go:340] cluster config:
	{Name:force-systemd-flag-811000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-flag-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:41:37.540437    7330 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:41:37.545093    7330 out.go:177] * Starting "force-systemd-flag-811000" primary control-plane node in "force-systemd-flag-811000" cluster
	I0624 03:41:37.553078    7330 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:41:37.553098    7330 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0624 03:41:37.553111    7330 cache.go:56] Caching tarball of preloaded images
	I0624 03:41:37.553196    7330 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0624 03:41:37.553202    7330 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:41:37.553283    7330 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/force-systemd-flag-811000/config.json ...
	I0624 03:41:37.553296    7330 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/force-systemd-flag-811000/config.json: {Name:mka4a2bdf40cac52eaae87a54c2a1cb6d653cac3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:41:37.553656    7330 start.go:360] acquireMachinesLock for force-systemd-flag-811000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:41:39.886111    7330 start.go:364] duration metric: took 2.332424209s to acquireMachinesLock for "force-systemd-flag-811000"
	I0624 03:41:39.886311    7330 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-flag-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:41:39.886505    7330 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:41:39.896633    7330 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0624 03:41:39.946807    7330 start.go:159] libmachine.API.Create for "force-systemd-flag-811000" (driver="qemu2")
	I0624 03:41:39.946862    7330 client.go:168] LocalClient.Create starting
	I0624 03:41:39.946949    7330 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:41:39.947012    7330 main.go:141] libmachine: Decoding PEM data...
	I0624 03:41:39.947030    7330 main.go:141] libmachine: Parsing certificate...
	I0624 03:41:39.947099    7330 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:41:39.947143    7330 main.go:141] libmachine: Decoding PEM data...
	I0624 03:41:39.947162    7330 main.go:141] libmachine: Parsing certificate...
	I0624 03:41:39.947772    7330 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:41:40.149167    7330 main.go:141] libmachine: Creating SSH key...
	I0624 03:41:40.191354    7330 main.go:141] libmachine: Creating Disk image...
	I0624 03:41:40.191361    7330 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:41:40.191528    7330 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-flag-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-flag-811000/disk.qcow2
	I0624 03:41:40.206607    7330 main.go:141] libmachine: STDOUT: 
	I0624 03:41:40.206630    7330 main.go:141] libmachine: STDERR: 
	I0624 03:41:40.206704    7330 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-flag-811000/disk.qcow2 +20000M
	I0624 03:41:40.219353    7330 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:41:40.219380    7330 main.go:141] libmachine: STDERR: 
	I0624 03:41:40.219394    7330 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-flag-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-flag-811000/disk.qcow2
	I0624 03:41:40.219400    7330 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:41:40.219433    7330 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-flag-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-flag-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-flag-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:99:cc:51:a7:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-flag-811000/disk.qcow2
	I0624 03:41:40.221803    7330 main.go:141] libmachine: STDOUT: 
	I0624 03:41:40.221825    7330 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:41:40.221850    7330 client.go:171] duration metric: took 274.983834ms to LocalClient.Create
	I0624 03:41:42.224040    7330 start.go:128] duration metric: took 2.33752175s to createHost
	I0624 03:41:42.224093    7330 start.go:83] releasing machines lock for "force-systemd-flag-811000", held for 2.337961791s
	W0624 03:41:42.224153    7330 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:41:42.242959    7330 out.go:177] * Deleting "force-systemd-flag-811000" in qemu2 ...
	W0624 03:41:42.266142    7330 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:41:42.266164    7330 start.go:728] Will try again in 5 seconds ...
	I0624 03:41:47.268295    7330 start.go:360] acquireMachinesLock for force-systemd-flag-811000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:41:47.268724    7330 start.go:364] duration metric: took 348.75µs to acquireMachinesLock for "force-systemd-flag-811000"
	I0624 03:41:47.268853    7330 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-flag-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:41:47.269142    7330 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:41:47.273913    7330 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0624 03:41:47.325335    7330 start.go:159] libmachine.API.Create for "force-systemd-flag-811000" (driver="qemu2")
	I0624 03:41:47.325387    7330 client.go:168] LocalClient.Create starting
	I0624 03:41:47.325500    7330 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:41:47.325563    7330 main.go:141] libmachine: Decoding PEM data...
	I0624 03:41:47.325581    7330 main.go:141] libmachine: Parsing certificate...
	I0624 03:41:47.325641    7330 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:41:47.325689    7330 main.go:141] libmachine: Decoding PEM data...
	I0624 03:41:47.325699    7330 main.go:141] libmachine: Parsing certificate...
	I0624 03:41:47.326226    7330 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:41:47.497573    7330 main.go:141] libmachine: Creating SSH key...
	I0624 03:41:47.631348    7330 main.go:141] libmachine: Creating Disk image...
	I0624 03:41:47.631357    7330 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:41:47.631590    7330 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-flag-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-flag-811000/disk.qcow2
	I0624 03:41:47.644516    7330 main.go:141] libmachine: STDOUT: 
	I0624 03:41:47.644537    7330 main.go:141] libmachine: STDERR: 
	I0624 03:41:47.644582    7330 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-flag-811000/disk.qcow2 +20000M
	I0624 03:41:47.655549    7330 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:41:47.655567    7330 main.go:141] libmachine: STDERR: 
	I0624 03:41:47.655587    7330 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-flag-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-flag-811000/disk.qcow2
	I0624 03:41:47.655592    7330 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:41:47.655620    7330 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-flag-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-flag-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-flag-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:2b:b7:e7:2b:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-flag-811000/disk.qcow2
	I0624 03:41:47.657365    7330 main.go:141] libmachine: STDOUT: 
	I0624 03:41:47.657378    7330 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:41:47.657396    7330 client.go:171] duration metric: took 332.005334ms to LocalClient.Create
	I0624 03:41:49.659534    7330 start.go:128] duration metric: took 2.390384s to createHost
	I0624 03:41:49.659573    7330 start.go:83] releasing machines lock for "force-systemd-flag-811000", held for 2.390846875s
	W0624 03:41:49.659858    7330 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:41:49.679472    7330 out.go:177] 
	W0624 03:41:49.684575    7330 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:41:49.684596    7330 out.go:239] * 
	* 
	W0624 03:41:49.686604    7330 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:41:49.695421    7330 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-811000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-811000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-811000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (70.121208ms)

-- stdout --
	* The control-plane node force-systemd-flag-811000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-811000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-811000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-06-24 03:41:49.773561 -0700 PDT m=+1384.248921126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-811000 -n force-systemd-flag-811000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-811000 -n force-systemd-flag-811000: exit status 7 (35.175166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-811000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-811000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-811000
--- FAIL: TestForceSystemdFlag (12.52s)
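The two create attempts above trace the retry shape visible in the log: a failed createHost, the "StartHost failed, but will try again" warning, a fixed 5-second pause, then exactly one more attempt before the exit status 80. A hedged Go sketch of that control flow (illustrative names only, not the actual minikube start.go code):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // startHostWithRetry mirrors the logged sequence: one attempt, a
    // warning, a 5-second sleep, then a single retry before giving up.
    func startHostWithRetry(create func() error) error {
        err := create()
        if err == nil {
            return nil
        }
        fmt.Println("! StartHost failed, but will try again:", err)
        time.Sleep(5 * time.Second)
        return create()
    }

    func main() {
        err := startHostWithRetry(func() error {
            // Stand-in for the qemu2 driver's create step, which fails
            // for as long as the socket_vmnet daemon is down.
            return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
        })
        if err != nil {
            fmt.Println("X Exiting due to GUEST_PROVISION:", err)
        }
    }

Because both attempts dial the same dead socket, the retry only postpones the GUEST_PROVISION exit seen in each of these tests.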

TestForceSystemdEnv (10.09s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-043000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-043000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.871986541s)

-- stdout --
	* [force-systemd-env-043000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-043000" primary control-plane node in "force-systemd-env-043000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-043000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0624 03:41:19.774678    7152 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:41:19.774795    7152 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:41:19.774798    7152 out.go:304] Setting ErrFile to fd 2...
	I0624 03:41:19.774808    7152 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:41:19.774929    7152 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:41:19.776017    7152 out.go:298] Setting JSON to false
	I0624 03:41:19.792383    7152 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6049,"bootTime":1719219630,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:41:19.792451    7152 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:41:19.799184    7152 out.go:177] * [force-systemd-env-043000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:41:19.807100    7152 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:41:19.807148    7152 notify.go:220] Checking for updates...
	I0624 03:41:19.814037    7152 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:41:19.817102    7152 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:41:19.820932    7152 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:41:19.824081    7152 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:41:19.827143    7152 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0624 03:41:19.830398    7152 config.go:182] Loaded profile config "multinode-913000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:41:19.830463    7152 config.go:182] Loaded profile config "running-upgrade-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0624 03:41:19.830527    7152 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:41:19.834048    7152 out.go:177] * Using the qemu2 driver based on user configuration
	I0624 03:41:19.841081    7152 start.go:297] selected driver: qemu2
	I0624 03:41:19.841085    7152 start.go:901] validating driver "qemu2" against <nil>
	I0624 03:41:19.841091    7152 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:41:19.843538    7152 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0624 03:41:19.846068    7152 out.go:177] * Automatically selected the socket_vmnet network
	I0624 03:41:19.849156    7152 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0624 03:41:19.849199    7152 cni.go:84] Creating CNI manager for ""
	I0624 03:41:19.849206    7152 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:41:19.849214    7152 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0624 03:41:19.849243    7152 start.go:340] cluster config:
	{Name:force-systemd-env-043000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-env-043000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:41:19.853754    7152 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:41:19.862097    7152 out.go:177] * Starting "force-systemd-env-043000" primary control-plane node in "force-systemd-env-043000" cluster
	I0624 03:41:19.866028    7152 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:41:19.866046    7152 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0624 03:41:19.866053    7152 cache.go:56] Caching tarball of preloaded images
	I0624 03:41:19.866109    7152 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0624 03:41:19.866115    7152 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:41:19.866163    7152 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/force-systemd-env-043000/config.json ...
	I0624 03:41:19.866174    7152 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/force-systemd-env-043000/config.json: {Name:mkd128a912a2a909068e47f26ce975a7e80ba8db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:41:19.866382    7152 start.go:360] acquireMachinesLock for force-systemd-env-043000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:41:19.866417    7152 start.go:364] duration metric: took 28.041µs to acquireMachinesLock for "force-systemd-env-043000"
	I0624 03:41:19.866428    7152 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-043000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-env-043000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:41:19.866452    7152 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:41:19.874101    7152 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0624 03:41:19.890588    7152 start.go:159] libmachine.API.Create for "force-systemd-env-043000" (driver="qemu2")
	I0624 03:41:19.890618    7152 client.go:168] LocalClient.Create starting
	I0624 03:41:19.890674    7152 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:41:19.890703    7152 main.go:141] libmachine: Decoding PEM data...
	I0624 03:41:19.890715    7152 main.go:141] libmachine: Parsing certificate...
	I0624 03:41:19.890752    7152 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:41:19.890774    7152 main.go:141] libmachine: Decoding PEM data...
	I0624 03:41:19.890782    7152 main.go:141] libmachine: Parsing certificate...
	I0624 03:41:19.891174    7152 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:41:20.034277    7152 main.go:141] libmachine: Creating SSH key...
	I0624 03:41:20.156700    7152 main.go:141] libmachine: Creating Disk image...
	I0624 03:41:20.156708    7152 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:41:20.156940    7152 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-env-043000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-env-043000/disk.qcow2
	I0624 03:41:20.169672    7152 main.go:141] libmachine: STDOUT: 
	I0624 03:41:20.169692    7152 main.go:141] libmachine: STDERR: 
	I0624 03:41:20.169752    7152 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-env-043000/disk.qcow2 +20000M
	I0624 03:41:20.180902    7152 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:41:20.180921    7152 main.go:141] libmachine: STDERR: 
	I0624 03:41:20.180936    7152 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-env-043000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-env-043000/disk.qcow2
	I0624 03:41:20.180941    7152 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:41:20.180973    7152 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-env-043000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-env-043000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-env-043000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:88:b4:46:ae:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-env-043000/disk.qcow2
	I0624 03:41:20.182679    7152 main.go:141] libmachine: STDOUT: 
	I0624 03:41:20.182693    7152 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:41:20.182714    7152 client.go:171] duration metric: took 292.0915ms to LocalClient.Create
	I0624 03:41:22.184761    7152 start.go:128] duration metric: took 2.318322084s to createHost
	I0624 03:41:22.184779    7152 start.go:83] releasing machines lock for "force-systemd-env-043000", held for 2.318377083s
	W0624 03:41:22.184791    7152 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:41:22.191868    7152 out.go:177] * Deleting "force-systemd-env-043000" in qemu2 ...
	W0624 03:41:22.201748    7152 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:41:22.201768    7152 start.go:728] Will try again in 5 seconds ...
	I0624 03:41:27.203799    7152 start.go:360] acquireMachinesLock for force-systemd-env-043000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:41:27.203911    7152 start.go:364] duration metric: took 86.333µs to acquireMachinesLock for "force-systemd-env-043000"
	I0624 03:41:27.203938    7152 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-043000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-env-043000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:41:27.203995    7152 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:41:27.220924    7152 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0624 03:41:27.236295    7152 start.go:159] libmachine.API.Create for "force-systemd-env-043000" (driver="qemu2")
	I0624 03:41:27.236331    7152 client.go:168] LocalClient.Create starting
	I0624 03:41:27.236401    7152 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:41:27.236442    7152 main.go:141] libmachine: Decoding PEM data...
	I0624 03:41:27.236450    7152 main.go:141] libmachine: Parsing certificate...
	I0624 03:41:27.236482    7152 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:41:27.236505    7152 main.go:141] libmachine: Decoding PEM data...
	I0624 03:41:27.236516    7152 main.go:141] libmachine: Parsing certificate...
	I0624 03:41:27.236825    7152 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:41:27.374409    7152 main.go:141] libmachine: Creating SSH key...
	I0624 03:41:27.549525    7152 main.go:141] libmachine: Creating Disk image...
	I0624 03:41:27.549536    7152 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:41:27.549776    7152 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-env-043000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-env-043000/disk.qcow2
	I0624 03:41:27.562981    7152 main.go:141] libmachine: STDOUT: 
	I0624 03:41:27.563002    7152 main.go:141] libmachine: STDERR: 
	I0624 03:41:27.563055    7152 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-env-043000/disk.qcow2 +20000M
	I0624 03:41:27.574612    7152 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:41:27.574635    7152 main.go:141] libmachine: STDERR: 
	I0624 03:41:27.574653    7152 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-env-043000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-env-043000/disk.qcow2
	I0624 03:41:27.574658    7152 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:41:27.574693    7152 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-env-043000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-env-043000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-env-043000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:e9:2d:dd:52:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/force-systemd-env-043000/disk.qcow2
	I0624 03:41:27.576592    7152 main.go:141] libmachine: STDOUT: 
	I0624 03:41:27.576620    7152 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:41:27.576635    7152 client.go:171] duration metric: took 340.30275ms to LocalClient.Create
	I0624 03:41:29.578823    7152 start.go:128] duration metric: took 2.374815208s to createHost
	I0624 03:41:29.578891    7152 start.go:83] releasing machines lock for "force-systemd-env-043000", held for 2.374988834s
	W0624 03:41:29.579323    7152 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-043000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-043000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:41:29.588037    7152 out.go:177] 
	W0624 03:41:29.592075    7152 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:41:29.592112    7152 out.go:239] * 
	* 
	W0624 03:41:29.594848    7152 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:41:29.604004    7152 out.go:177] 
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-043000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-043000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-043000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (78.404792ms)
-- stdout --
	* The control-plane node force-systemd-env-043000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-043000"
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-043000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-06-24 03:41:29.699288 -0700 PDT m=+1364.174472710
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-043000 -n force-systemd-env-043000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-043000 -n force-systemd-env-043000: exit status 7 (34.017166ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-043000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-043000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-043000
--- FAIL: TestForceSystemdEnv (10.09s)
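
All of the qemu2 failures in this run reduce to the single host-side condition visible in the stderr above: nothing is listening on the unix socket /var/run/socket_vmnet, so socket_vmnet_client exits with "Connection refused" before QEMU ever starts. A minimal Go sketch of that reachability check, standard library only (the socket path is taken from the logs above; this sketch is illustrative and not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path taken from the "Failed to connect" errors in this report.
	const sock = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" matches this run's failure mode: no
		// socket_vmnet daemon is accepting connections on the socket.
		fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
		return
	}
	conn.Close()
	fmt.Printf("socket_vmnet accepting connections at %s\n", sock)
}

A probe like this failing on the Jenkins host would point at the socket_vmnet daemon itself rather than at minikube or the tests.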
TestErrorSpam/setup (9.83s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-659000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-659000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 --driver=qemu2 : exit status 80 (9.826842917s)
-- stdout --
	* [nospam-659000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-659000" primary control-plane node in "nospam-659000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-659000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-659000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-659000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-659000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-659000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
- MINIKUBE_LOCATION=19124
- KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-659000" primary control-plane node in "nospam-659000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
* Deleting "nospam-659000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-659000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.83s)
TestFunctional/serial/StartWithProxy (9.93s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-880000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-880000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.855299708s)
-- stdout --
	* [functional-880000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-880000" primary control-plane node in "functional-880000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-880000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50934 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50934 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50934 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-880000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-880000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-880000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
- MINIKUBE_LOCATION=19124
- KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-880000" primary control-plane node in "functional-880000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
* Deleting "functional-880000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:50934 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:50934 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:50934 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-880000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-880000 -n functional-880000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-880000 -n functional-880000: exit status 7 (73.68025ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-880000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.93s)
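
The repeated "! Local proxy ignored: not passing HTTP_PROXY=localhost:50934 to docker env." warnings above are expected: a proxy bound to localhost on the host would be unreachable from inside the VM, so minikube declines to forward it. A hedged sketch of that kind of check, with a hypothetical isLocalProxy helper (an illustration of the idea, not minikube's actual implementation):

package main

import (
	"fmt"
	"net/url"
	"os"
)

// isLocalProxy reports whether a proxy value points at the local machine.
// Hypothetical helper for illustration only.
func isLocalProxy(v string) bool {
	u, err := url.Parse(v)
	if err != nil || u.Host == "" {
		// Bare "host:port" values like "localhost:50934" parse without a
		// host unless a scheme is prepended.
		if u, err = url.Parse("http://" + v); err != nil {
			return false
		}
	}
	h := u.Hostname()
	return h == "localhost" || h == "127.0.0.1" || h == "::1"
}

func main() {
	if v := os.Getenv("HTTP_PROXY"); v != "" && isLocalProxy(v) {
		fmt.Printf("! Local proxy ignored: not passing HTTP_PROXY=%s to docker env.\n", v)
	}
}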
TestFunctional/serial/SoftStart (5.25s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-880000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-880000 --alsologtostderr -v=8: exit status 80 (5.182520417s)
-- stdout --
	* [functional-880000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-880000" primary control-plane node in "functional-880000" cluster
	* Restarting existing qemu2 VM for "functional-880000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-880000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0624 03:19:47.453825    5381 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:19:47.454191    5381 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:19:47.454195    5381 out.go:304] Setting ErrFile to fd 2...
	I0624 03:19:47.454198    5381 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:19:47.454385    5381 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:19:47.455676    5381 out.go:298] Setting JSON to false
	I0624 03:19:47.471761    5381 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4757,"bootTime":1719219630,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:19:47.471824    5381 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:19:47.476106    5381 out.go:177] * [functional-880000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:19:47.483072    5381 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:19:47.483134    5381 notify.go:220] Checking for updates...
	I0624 03:19:47.490017    5381 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:19:47.494069    5381 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:19:47.497000    5381 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:19:47.500086    5381 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:19:47.503055    5381 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:19:47.506219    5381 config.go:182] Loaded profile config "functional-880000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:19:47.506276    5381 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:19:47.511030    5381 out.go:177] * Using the qemu2 driver based on existing profile
	I0624 03:19:47.518003    5381 start.go:297] selected driver: qemu2
	I0624 03:19:47.518016    5381 start.go:901] validating driver "qemu2" against &{Name:functional-880000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-880000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:19:47.518066    5381 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:19:47.520302    5381 cni.go:84] Creating CNI manager for ""
	I0624 03:19:47.520316    5381 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:19:47.520365    5381 start.go:340] cluster config:
	{Name:functional-880000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-880000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:19:47.524665    5381 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:19:47.531901    5381 out.go:177] * Starting "functional-880000" primary control-plane node in "functional-880000" cluster
	I0624 03:19:47.536114    5381 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:19:47.536129    5381 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0624 03:19:47.536136    5381 cache.go:56] Caching tarball of preloaded images
	I0624 03:19:47.536213    5381 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0624 03:19:47.536219    5381 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:19:47.536277    5381 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/functional-880000/config.json ...
	I0624 03:19:47.536716    5381 start.go:360] acquireMachinesLock for functional-880000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:19:47.536743    5381 start.go:364] duration metric: took 20.916µs to acquireMachinesLock for "functional-880000"
	I0624 03:19:47.536751    5381 start.go:96] Skipping create...Using existing machine configuration
	I0624 03:19:47.536758    5381 fix.go:54] fixHost starting: 
	I0624 03:19:47.536868    5381 fix.go:112] recreateIfNeeded on functional-880000: state=Stopped err=<nil>
	W0624 03:19:47.536879    5381 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 03:19:47.540000    5381 out.go:177] * Restarting existing qemu2 VM for "functional-880000" ...
	I0624 03:19:47.548041    5381 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/functional-880000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/functional-880000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/functional-880000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:e5:55:50:a7:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/functional-880000/disk.qcow2
	I0624 03:19:47.549963    5381 main.go:141] libmachine: STDOUT: 
	I0624 03:19:47.549985    5381 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:19:47.550015    5381 fix.go:56] duration metric: took 13.258417ms for fixHost
	I0624 03:19:47.550019    5381 start.go:83] releasing machines lock for "functional-880000", held for 13.2725ms
	W0624 03:19:47.550026    5381 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:19:47.550061    5381 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:19:47.550066    5381 start.go:728] Will try again in 5 seconds ...
	I0624 03:19:52.551457    5381 start.go:360] acquireMachinesLock for functional-880000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:19:52.551857    5381 start.go:364] duration metric: took 307.208µs to acquireMachinesLock for "functional-880000"
	I0624 03:19:52.551969    5381 start.go:96] Skipping create...Using existing machine configuration
	I0624 03:19:52.551990    5381 fix.go:54] fixHost starting: 
	I0624 03:19:52.552693    5381 fix.go:112] recreateIfNeeded on functional-880000: state=Stopped err=<nil>
	W0624 03:19:52.552718    5381 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 03:19:52.560075    5381 out.go:177] * Restarting existing qemu2 VM for "functional-880000" ...
	I0624 03:19:52.564335    5381 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/functional-880000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/functional-880000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/functional-880000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:e5:55:50:a7:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/functional-880000/disk.qcow2
	I0624 03:19:52.573004    5381 main.go:141] libmachine: STDOUT: 
	I0624 03:19:52.573072    5381 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:19:52.573137    5381 fix.go:56] duration metric: took 21.150917ms for fixHost
	I0624 03:19:52.573153    5381 start.go:83] releasing machines lock for "functional-880000", held for 21.270667ms
	W0624 03:19:52.573286    5381 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-880000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-880000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:19:52.580142    5381 out.go:177] 
	W0624 03:19:52.583064    5381 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:19:52.583119    5381 out.go:239] * 
	* 
	W0624 03:19:52.585566    5381 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:19:52.593041    5381 out.go:177] 
** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-880000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.184415167s for "functional-880000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-880000 -n functional-880000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-880000 -n functional-880000: exit status 7 (67.097416ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-880000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.25s)
TestFunctional/serial/KubeContext (0.06s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (31.648167ms)
** stderr ** 
	error: current-context is not set
** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-880000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-880000 -n functional-880000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-880000 -n functional-880000: exit status 7 (30.391834ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-880000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
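
The "current-context is not set" error is a downstream effect of the start failures above: minikube writes the functional-880000 context to kubeconfig only after the VM comes up, which never happened. For reference, the lookup that "kubectl config current-context" performs can be reproduced with client-go; a minimal sketch using the default kubeconfig loading rules (illustrative only, not part of the test suite):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Merge kubeconfig the way kubectl does: $KUBECONFIG, then ~/.kube/config.
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	cfg, err := rules.Load()
	if err != nil {
		fmt.Println("failed to load kubeconfig:", err)
		return
	}
	if cfg.CurrentContext == "" {
		// The state this test run is in: no context was ever written.
		fmt.Println("error: current-context is not set")
		return
	}
	fmt.Println("current-context:", cfg.CurrentContext)
}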
TestFunctional/serial/KubectlGetPods (0.06s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-880000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-880000 get po -A: exit status 1 (26.268125ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-880000
** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-880000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-880000\n"*: args "kubectl --context functional-880000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-880000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-880000 -n functional-880000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-880000 -n functional-880000: exit status 7 (29.075417ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-880000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh sudo crictl images: exit status 83 (39.926917ms)
-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"
-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-880000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"
-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)
TestFunctional/serial/CacheCmd/cache/cache_reload (0.15s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (39.214166ms)
-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"
-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-880000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (40.385042ms)
-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"
-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (40.835583ms)
-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"
-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-880000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.15s)
TestFunctional/serial/MinikubeKubectlCmd (0.63s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 kubectl -- --context functional-880000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 kubectl -- --context functional-880000 get pods: exit status 1 (602.639916ms)
** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-880000
	* no server found for cluster "functional-880000"
** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-880000 kubectl -- --context functional-880000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-880000 -n functional-880000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-880000 -n functional-880000: exit status 7 (31.789333ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-880000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.63s)
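The configuration errors show that no "functional-880000" context was ever written to the kubeconfig, which follows from the cluster never starting. A quick diagnostic sketch using only standard kubectl subcommands:

	kubectl config get-contexts                             # is functional-880000 listed at all?
	kubectl config view -o jsonpath='{.clusters[*].name}'   # cluster entries the kubeconfig knows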

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.96s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-880000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-880000 get pods: exit status 1 (926.701125ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-880000
	* no server found for cluster "functional-880000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-880000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-880000 -n functional-880000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-880000 -n functional-880000: exit status 7 (29.209833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-880000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.96s)
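The direct "out/kubectl" invocation fails identically to the wrapped "minikube kubectl --" form above, confirming the missing cluster, not the wrapper, is at fault. If the VM were running, the kubeconfig entry could be refreshed with the standard update-context subcommand; a sketch (it would fail the same way while the host is stopped):

	out/minikube-darwin-arm64 -p functional-880000 update-context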

TestFunctional/serial/ExtraConfig (5.25s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-880000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-880000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.176688083s)

-- stdout --
	* [functional-880000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-880000" primary control-plane node in "functional-880000" cluster
	* Restarting existing qemu2 VM for "functional-880000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-880000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-880000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-880000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.177241084s for "functional-880000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-880000 -n functional-880000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-880000 -n functional-880000: exit status 7 (67.954791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-880000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.25s)
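Every restart attempt above dies connecting to /var/run/socket_vmnet, the exact path stored in the profile config (SocketVMnetPath). A hedged recovery sketch: the Homebrew service invocation is an assumption about how socket_vmnet was installed, while the delete/start sequence is what the error text itself recommends:

	ls -l /var/run/socket_vmnet                              # does the socket exist?
	sudo brew services start socket_vmnet                    # (assumed install method) restart the daemon
	out/minikube-darwin-arm64 delete -p functional-880000    # recreate the profile, per the error hint
	out/minikube-darwin-arm64 start -p functional-880000 --driver=qemu2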

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-880000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-880000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (29.947ms)

** stderr ** 
	error: context "functional-880000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-880000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-880000 -n functional-880000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-880000 -n functional-880000: exit status 7 (30.638834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-880000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)
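The test fetches control-plane pods as JSON to check component health; against a running cluster the same label selector answers that directly. A sketch using standard kubectl jsonpath output (selector copied from the failing command):

	kubectl --context functional-880000 -n kube-system get po -l tier=control-plane \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'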

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 logs: exit status 83 (75.798125ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-954000 | jenkins | v1.33.1 | 24 Jun 24 03:18 PDT |                     |
	|         | -p download-only-954000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 24 Jun 24 03:18 PDT | 24 Jun 24 03:18 PDT |
	| delete  | -p download-only-954000                                                  | download-only-954000 | jenkins | v1.33.1 | 24 Jun 24 03:18 PDT | 24 Jun 24 03:18 PDT |
	| start   | -o=json --download-only                                                  | download-only-105000 | jenkins | v1.33.1 | 24 Jun 24 03:18 PDT |                     |
	|         | -p download-only-105000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
	| delete  | -p download-only-105000                                                  | download-only-105000 | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
	| delete  | -p download-only-954000                                                  | download-only-954000 | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
	| delete  | -p download-only-105000                                                  | download-only-105000 | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
	| start   | --download-only -p                                                       | binary-mirror-588000 | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
	|         | binary-mirror-588000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:50905                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-588000                                                  | binary-mirror-588000 | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
	| addons  | enable dashboard -p                                                      | addons-495000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
	|         | addons-495000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-495000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
	|         | addons-495000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-495000 --wait=true                                             | addons-495000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-495000                                                         | addons-495000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
	| start   | -p nospam-659000 -n=1 --memory=2250 --wait=false                         | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-659000                                                         | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
	| start   | -p functional-880000                                                     | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-880000                                                     | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-880000 cache add                                              | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-880000 cache add                                              | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-880000 cache add                                              | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-880000 cache add                                              | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
	|         | minikube-local-cache-test:functional-880000                              |                      |         |         |                     |                     |
	| cache   | functional-880000 cache delete                                           | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
	|         | minikube-local-cache-test:functional-880000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
	| ssh     | functional-880000 ssh sudo                                               | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-880000                                                        | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-880000 ssh                                                    | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-880000 cache reload                                           | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
	| ssh     | functional-880000 ssh                                                    | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-880000 kubectl --                                             | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
	|         | --context functional-880000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-880000                                                     | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/24 03:19:59
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0624 03:19:59.036946    5460 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:19:59.037070    5460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:19:59.037072    5460 out.go:304] Setting ErrFile to fd 2...
	I0624 03:19:59.037074    5460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:19:59.037178    5460 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:19:59.038370    5460 out.go:298] Setting JSON to false
	I0624 03:19:59.054253    5460 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4769,"bootTime":1719219630,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:19:59.054311    5460 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:19:59.058049    5460 out.go:177] * [functional-880000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:19:59.065958    5460 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:19:59.065996    5460 notify.go:220] Checking for updates...
	I0624 03:19:59.073107    5460 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:19:59.077078    5460 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:19:59.080111    5460 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:19:59.083106    5460 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:19:59.085969    5460 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:19:59.089281    5460 config.go:182] Loaded profile config "functional-880000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:19:59.089337    5460 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:19:59.094036    5460 out.go:177] * Using the qemu2 driver based on existing profile
	I0624 03:19:59.101086    5460 start.go:297] selected driver: qemu2
	I0624 03:19:59.101089    5460 start.go:901] validating driver "qemu2" against &{Name:functional-880000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-880000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:19:59.101129    5460 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:19:59.103453    5460 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 03:19:59.103492    5460 cni.go:84] Creating CNI manager for ""
	I0624 03:19:59.103498    5460 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:19:59.103539    5460 start.go:340] cluster config:
	{Name:functional-880000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-880000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:19:59.107962    5460 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:19:59.115015    5460 out.go:177] * Starting "functional-880000" primary control-plane node in "functional-880000" cluster
	I0624 03:19:59.119077    5460 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:19:59.119091    5460 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0624 03:19:59.119097    5460 cache.go:56] Caching tarball of preloaded images
	I0624 03:19:59.119162    5460 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0624 03:19:59.119167    5460 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:19:59.119223    5460 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/functional-880000/config.json ...
	I0624 03:19:59.119664    5460 start.go:360] acquireMachinesLock for functional-880000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:19:59.119696    5460 start.go:364] duration metric: took 28.167µs to acquireMachinesLock for "functional-880000"
	I0624 03:19:59.119703    5460 start.go:96] Skipping create...Using existing machine configuration
	I0624 03:19:59.119708    5460 fix.go:54] fixHost starting: 
	I0624 03:19:59.119815    5460 fix.go:112] recreateIfNeeded on functional-880000: state=Stopped err=<nil>
	W0624 03:19:59.119821    5460 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 03:19:59.127062    5460 out.go:177] * Restarting existing qemu2 VM for "functional-880000" ...
	I0624 03:19:59.131067    5460 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/functional-880000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/functional-880000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/functional-880000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:e5:55:50:a7:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/functional-880000/disk.qcow2
	I0624 03:19:59.133009    5460 main.go:141] libmachine: STDOUT: 
	I0624 03:19:59.133023    5460 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:19:59.133053    5460 fix.go:56] duration metric: took 13.346042ms for fixHost
	I0624 03:19:59.133056    5460 start.go:83] releasing machines lock for "functional-880000", held for 13.357ms
	W0624 03:19:59.133061    5460 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:19:59.133092    5460 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:19:59.133096    5460 start.go:728] Will try again in 5 seconds ...
	I0624 03:20:04.133722    5460 start.go:360] acquireMachinesLock for functional-880000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:20:04.134088    5460 start.go:364] duration metric: took 298.084µs to acquireMachinesLock for "functional-880000"
	I0624 03:20:04.134198    5460 start.go:96] Skipping create...Using existing machine configuration
	I0624 03:20:04.134207    5460 fix.go:54] fixHost starting: 
	I0624 03:20:04.134920    5460 fix.go:112] recreateIfNeeded on functional-880000: state=Stopped err=<nil>
	W0624 03:20:04.134940    5460 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 03:20:04.139339    5460 out.go:177] * Restarting existing qemu2 VM for "functional-880000" ...
	I0624 03:20:04.143524    5460 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/functional-880000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/functional-880000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/functional-880000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:e5:55:50:a7:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/functional-880000/disk.qcow2
	I0624 03:20:04.152331    5460 main.go:141] libmachine: STDOUT: 
	I0624 03:20:04.152374    5460 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:20:04.152455    5460 fix.go:56] duration metric: took 18.248792ms for fixHost
	I0624 03:20:04.152466    5460 start.go:83] releasing machines lock for "functional-880000", held for 18.363417ms
	W0624 03:20:04.152619    5460 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-880000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:20:04.160297    5460 out.go:177] 
	W0624 03:20:04.163314    5460 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:20:04.163336    5460 out.go:239] * 
	W0624 03:20:04.165865    5460 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:20:04.173284    5460 out.go:177] 
	
	
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"

-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-880000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-954000 | jenkins | v1.33.1 | 24 Jun 24 03:18 PDT |                     |
|         | -p download-only-954000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 24 Jun 24 03:18 PDT | 24 Jun 24 03:18 PDT |
| delete  | -p download-only-954000                                                  | download-only-954000 | jenkins | v1.33.1 | 24 Jun 24 03:18 PDT | 24 Jun 24 03:18 PDT |
| start   | -o=json --download-only                                                  | download-only-105000 | jenkins | v1.33.1 | 24 Jun 24 03:18 PDT |                     |
|         | -p download-only-105000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.2                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
| delete  | -p download-only-105000                                                  | download-only-105000 | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
| delete  | -p download-only-954000                                                  | download-only-954000 | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
| delete  | -p download-only-105000                                                  | download-only-105000 | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
| start   | --download-only -p                                                       | binary-mirror-588000 | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | binary-mirror-588000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:50905                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-588000                                                  | binary-mirror-588000 | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
| addons  | enable dashboard -p                                                      | addons-495000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | addons-495000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-495000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | addons-495000                                                            |                      |         |         |                     |                     |
| start   | -p addons-495000 --wait=true                                             | addons-495000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-495000                                                         | addons-495000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
| start   | -p nospam-659000 -n=1 --memory=2250 --wait=false                         | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-659000                                                         | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
| start   | -p functional-880000                                                     | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-880000                                                     | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-880000 cache add                                              | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-880000 cache add                                              | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-880000 cache add                                              | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-880000 cache add                                              | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
|         | minikube-local-cache-test:functional-880000                              |                      |         |         |                     |                     |
| cache   | functional-880000 cache delete                                           | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
|         | minikube-local-cache-test:functional-880000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
| ssh     | functional-880000 ssh sudo                                               | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-880000                                                        | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-880000 ssh                                                    | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-880000 cache reload                                           | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
| ssh     | functional-880000 ssh                                                    | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-880000 kubectl --                                             | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | --context functional-880000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-880000                                                     | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/06/24 03:19:59
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.4 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0624 03:19:59.036946    5460 out.go:291] Setting OutFile to fd 1 ...
I0624 03:19:59.037070    5460 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0624 03:19:59.037072    5460 out.go:304] Setting ErrFile to fd 2...
I0624 03:19:59.037074    5460 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0624 03:19:59.037178    5460 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
I0624 03:19:59.038370    5460 out.go:298] Setting JSON to false
I0624 03:19:59.054253    5460 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4769,"bootTime":1719219630,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0624 03:19:59.054311    5460 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0624 03:19:59.058049    5460 out.go:177] * [functional-880000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
I0624 03:19:59.065958    5460 out.go:177]   - MINIKUBE_LOCATION=19124
I0624 03:19:59.065996    5460 notify.go:220] Checking for updates...
I0624 03:19:59.073107    5460 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
I0624 03:19:59.077078    5460 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0624 03:19:59.080111    5460 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0624 03:19:59.083106    5460 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
I0624 03:19:59.085969    5460 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0624 03:19:59.089281    5460 config.go:182] Loaded profile config "functional-880000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0624 03:19:59.089337    5460 driver.go:392] Setting default libvirt URI to qemu:///system
I0624 03:19:59.094036    5460 out.go:177] * Using the qemu2 driver based on existing profile
I0624 03:19:59.101086    5460 start.go:297] selected driver: qemu2
I0624 03:19:59.101089    5460 start.go:901] validating driver "qemu2" against &{Name:functional-880000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-880000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0624 03:19:59.101129    5460 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0624 03:19:59.103453    5460 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0624 03:19:59.103492    5460 cni.go:84] Creating CNI manager for ""
I0624 03:19:59.103498    5460 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0624 03:19:59.103539    5460 start.go:340] cluster config:
{Name:functional-880000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-880000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0624 03:19:59.107962    5460 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0624 03:19:59.115015    5460 out.go:177] * Starting "functional-880000" primary control-plane node in "functional-880000" cluster
I0624 03:19:59.119077    5460 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
I0624 03:19:59.119091    5460 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
I0624 03:19:59.119097    5460 cache.go:56] Caching tarball of preloaded images
I0624 03:19:59.119162    5460 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0624 03:19:59.119167    5460 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
I0624 03:19:59.119223    5460 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/functional-880000/config.json ...
I0624 03:19:59.119664    5460 start.go:360] acquireMachinesLock for functional-880000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0624 03:19:59.119696    5460 start.go:364] duration metric: took 28.167µs to acquireMachinesLock for "functional-880000"
I0624 03:19:59.119703    5460 start.go:96] Skipping create...Using existing machine configuration
I0624 03:19:59.119708    5460 fix.go:54] fixHost starting: 
I0624 03:19:59.119815    5460 fix.go:112] recreateIfNeeded on functional-880000: state=Stopped err=<nil>
W0624 03:19:59.119821    5460 fix.go:138] unexpected machine state, will restart: <nil>
I0624 03:19:59.127062    5460 out.go:177] * Restarting existing qemu2 VM for "functional-880000" ...
I0624 03:19:59.131067    5460 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/functional-880000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/functional-880000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/functional-880000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:e5:55:50:a7:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/functional-880000/disk.qcow2
I0624 03:19:59.133009    5460 main.go:141] libmachine: STDOUT: 
I0624 03:19:59.133023    5460 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0624 03:19:59.133053    5460 fix.go:56] duration metric: took 13.346042ms for fixHost
I0624 03:19:59.133056    5460 start.go:83] releasing machines lock for "functional-880000", held for 13.357ms
W0624 03:19:59.133061    5460 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0624 03:19:59.133092    5460 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0624 03:19:59.133096    5460 start.go:728] Will try again in 5 seconds ...
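The refusal above is host-side: /opt/socket_vmnet/bin/socket_vmnet_client could not reach the unix socket at /var/run/socket_vmnet, so qemu-system-aarch64 was never launched and the VM stayed Stopped. A minimal sketch of the checks worth running on the agent before the retry fires (generic commands; how the daemon is supervised is not visible in this log):

    # Does the socket path exist, and is it a socket?
    ls -l /var/run/socket_vmnet
    # Is a socket_vmnet daemon process alive at all?
    pgrep -fl socket_vmnet

If neither check passes, the retry below can be expected to fail the same way.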
I0624 03:20:04.133722    5460 start.go:360] acquireMachinesLock for functional-880000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0624 03:20:04.134088    5460 start.go:364] duration metric: took 298.084µs to acquireMachinesLock for "functional-880000"
I0624 03:20:04.134198    5460 start.go:96] Skipping create...Using existing machine configuration
I0624 03:20:04.134207    5460 fix.go:54] fixHost starting: 
I0624 03:20:04.134920    5460 fix.go:112] recreateIfNeeded on functional-880000: state=Stopped err=<nil>
W0624 03:20:04.134940    5460 fix.go:138] unexpected machine state, will restart: <nil>
I0624 03:20:04.139339    5460 out.go:177] * Restarting existing qemu2 VM for "functional-880000" ...
I0624 03:20:04.143524    5460 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/functional-880000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/functional-880000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/functional-880000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:e5:55:50:a7:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/functional-880000/disk.qcow2
I0624 03:20:04.152331    5460 main.go:141] libmachine: STDOUT: 
I0624 03:20:04.152374    5460 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0624 03:20:04.152455    5460 fix.go:56] duration metric: took 18.248792ms for fixHost
I0624 03:20:04.152466    5460 start.go:83] releasing machines lock for "functional-880000", held for 18.363417ms
W0624 03:20:04.152619    5460 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-880000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0624 03:20:04.160297    5460 out.go:177] 
W0624 03:20:04.163314    5460 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0624 03:20:04.163336    5460 out.go:239] * 
W0624 03:20:04.165865    5460 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0624 03:20:04.173284    5460 out.go:177] 

* The control-plane node functional-880000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-880000"
***
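The empty log output above is a cascade from the provisioning failure: socket_vmnet refused the connection, the qemu2 VM never booted, and minikube logs therefore had no guest output to print. A plausible first remediation, sketched under the assumption that socket_vmnet runs as the launchd service its README installs (the label io.github.lima-vm.socket_vmnet is that project's default, not something shown in this log):

    # Kick the vmnet daemon back to life (requires root; launchd label assumed)
    sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet
    # Then retry the profile with the same binary the suite uses
    out/minikube-darwin-arm64 start -p functional-880000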
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)

TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd164057516/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
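(The captured output follows between the *** markers below.) The assertion checks that the written log file contains expected words, "Linux" among them; it can be reproduced by hand against the exact path from the command above:

    # Regenerate the log file, then count the lines matching the expected word
    out/minikube-darwin-arm64 -p functional-880000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd164057516/001/logs.txt
    grep -c "Linux" /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd164057516/001/logs.txt   # prints 0 here, matching the failure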
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-954000 | jenkins | v1.33.1 | 24 Jun 24 03:18 PDT |                     |
|         | -p download-only-954000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 24 Jun 24 03:18 PDT | 24 Jun 24 03:18 PDT |
| delete  | -p download-only-954000                                                  | download-only-954000 | jenkins | v1.33.1 | 24 Jun 24 03:18 PDT | 24 Jun 24 03:18 PDT |
| start   | -o=json --download-only                                                  | download-only-105000 | jenkins | v1.33.1 | 24 Jun 24 03:18 PDT |                     |
|         | -p download-only-105000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.2                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
| delete  | -p download-only-105000                                                  | download-only-105000 | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
| delete  | -p download-only-954000                                                  | download-only-954000 | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
| delete  | -p download-only-105000                                                  | download-only-105000 | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
| start   | --download-only -p                                                       | binary-mirror-588000 | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | binary-mirror-588000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:50905                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-588000                                                  | binary-mirror-588000 | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
| addons  | enable dashboard -p                                                      | addons-495000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | addons-495000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-495000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | addons-495000                                                            |                      |         |         |                     |                     |
| start   | -p addons-495000 --wait=true                                             | addons-495000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-495000                                                         | addons-495000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
| start   | -p nospam-659000 -n=1 --memory=2250 --wait=false                         | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-659000 --log_dir                                                  | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-659000                                                         | nospam-659000        | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
| start   | -p functional-880000                                                     | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-880000                                                     | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-880000 cache add                                              | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-880000 cache add                                              | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-880000 cache add                                              | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-880000 cache add                                              | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
|         | minikube-local-cache-test:functional-880000                              |                      |         |         |                     |                     |
| cache   | functional-880000 cache delete                                           | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
|         | minikube-local-cache-test:functional-880000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
| ssh     | functional-880000 ssh sudo                                               | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-880000                                                        | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-880000 ssh                                                    | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-880000 cache reload                                           | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
| ssh     | functional-880000 ssh                                                    | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT | 24 Jun 24 03:19 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-880000 kubectl --                                             | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | --context functional-880000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-880000                                                     | functional-880000    | jenkins | v1.33.1 | 24 Jun 24 03:19 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/06/24 03:19:59
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.4 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0624 03:19:59.036946    5460 out.go:291] Setting OutFile to fd 1 ...
I0624 03:19:59.037070    5460 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0624 03:19:59.037072    5460 out.go:304] Setting ErrFile to fd 2...
I0624 03:19:59.037074    5460 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0624 03:19:59.037178    5460 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
I0624 03:19:59.038370    5460 out.go:298] Setting JSON to false
I0624 03:19:59.054253    5460 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4769,"bootTime":1719219630,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0624 03:19:59.054311    5460 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0624 03:19:59.058049    5460 out.go:177] * [functional-880000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
I0624 03:19:59.065958    5460 out.go:177]   - MINIKUBE_LOCATION=19124
I0624 03:19:59.065996    5460 notify.go:220] Checking for updates...
I0624 03:19:59.073107    5460 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
I0624 03:19:59.077078    5460 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0624 03:19:59.080111    5460 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0624 03:19:59.083106    5460 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
I0624 03:19:59.085969    5460 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0624 03:19:59.089281    5460 config.go:182] Loaded profile config "functional-880000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0624 03:19:59.089337    5460 driver.go:392] Setting default libvirt URI to qemu:///system
I0624 03:19:59.094036    5460 out.go:177] * Using the qemu2 driver based on existing profile
I0624 03:19:59.101086    5460 start.go:297] selected driver: qemu2
I0624 03:19:59.101089    5460 start.go:901] validating driver "qemu2" against &{Name:functional-880000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-880000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0624 03:19:59.101129    5460 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0624 03:19:59.103453    5460 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0624 03:19:59.103492    5460 cni.go:84] Creating CNI manager for ""
I0624 03:19:59.103498    5460 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0624 03:19:59.103539    5460 start.go:340] cluster config:
{Name:functional-880000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-880000 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
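For reference, the ExtraOptions entry recorded in the config above ({Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}) is the persisted form of minikube's --extra-config flag. A minimal sketch of the kind of invocation that produces such a profile (the exact harness command is not shown at this point in the log):

    minikube start -p functional-880000 --driver=qemu2 \
        --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision

--extra-config takes component.key=value pairs and is saved into the profile's config.json, which is the structure being dumped here.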
I0624 03:19:59.107962    5460 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0624 03:19:59.115015    5460 out.go:177] * Starting "functional-880000" primary control-plane node in "functional-880000" cluster
I0624 03:19:59.119077    5460 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
I0624 03:19:59.119091    5460 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
I0624 03:19:59.119097    5460 cache.go:56] Caching tarball of preloaded images
I0624 03:19:59.119162    5460 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0624 03:19:59.119167    5460 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
I0624 03:19:59.119223    5460 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/functional-880000/config.json ...
I0624 03:19:59.119664    5460 start.go:360] acquireMachinesLock for functional-880000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0624 03:19:59.119696    5460 start.go:364] duration metric: took 28.167µs to acquireMachinesLock for "functional-880000"
I0624 03:19:59.119703    5460 start.go:96] Skipping create...Using existing machine configuration
I0624 03:19:59.119708    5460 fix.go:54] fixHost starting: 
I0624 03:19:59.119815    5460 fix.go:112] recreateIfNeeded on functional-880000: state=Stopped err=<nil>
W0624 03:19:59.119821    5460 fix.go:138] unexpected machine state, will restart: <nil>
I0624 03:19:59.127062    5460 out.go:177] * Restarting existing qemu2 VM for "functional-880000" ...
I0624 03:19:59.131067    5460 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/functional-880000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/functional-880000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/functional-880000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:e5:55:50:a7:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/functional-880000/disk.qcow2
I0624 03:19:59.133009    5460 main.go:141] libmachine: STDOUT: 
I0624 03:19:59.133023    5460 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0624 03:19:59.133053    5460 fix.go:56] duration metric: took 13.346042ms for fixHost
I0624 03:19:59.133056    5460 start.go:83] releasing machines lock for "functional-880000", held for 13.357ms
W0624 03:19:59.133061    5460 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0624 03:19:59.133092    5460 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0624 03:19:59.133096    5460 start.go:728] Will try again in 5 seconds ...
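Both start attempts in this test die at the same step: socket_vmnet_client cannot connect to the socket_vmnet daemon, so the QEMU VM is never brought up with networking. A host-side sanity check, as a sketch (the paths are taken from the command line above; the daemon's start flags depend on the local install):

    pgrep -fl socket_vmnet                  # is the daemon running?
    ls -l /var/run/socket_vmnet             # does its socket exist?
    # if not, start it as root (the gateway address here is an example value):
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

"Connection refused" on a Unix socket means nothing is accepting on it; a leftover socket file from a crashed daemon produces the same error, which is why the retry below fails identically.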
I0624 03:20:04.133722    5460 start.go:360] acquireMachinesLock for functional-880000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0624 03:20:04.134088    5460 start.go:364] duration metric: took 298.084µs to acquireMachinesLock for "functional-880000"
I0624 03:20:04.134198    5460 start.go:96] Skipping create...Using existing machine configuration
I0624 03:20:04.134207    5460 fix.go:54] fixHost starting: 
I0624 03:20:04.134920    5460 fix.go:112] recreateIfNeeded on functional-880000: state=Stopped err=<nil>
W0624 03:20:04.134940    5460 fix.go:138] unexpected machine state, will restart: <nil>
I0624 03:20:04.139339    5460 out.go:177] * Restarting existing qemu2 VM for "functional-880000" ...
I0624 03:20:04.143524    5460 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/functional-880000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/functional-880000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/functional-880000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:e5:55:50:a7:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/functional-880000/disk.qcow2
I0624 03:20:04.152331    5460 main.go:141] libmachine: STDOUT: 
I0624 03:20:04.152374    5460 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0624 03:20:04.152455    5460 fix.go:56] duration metric: took 18.248792ms for fixHost
I0624 03:20:04.152466    5460 start.go:83] releasing machines lock for "functional-880000", held for 18.363417ms
W0624 03:20:04.152619    5460 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-880000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0624 03:20:04.160297    5460 out.go:177] 
W0624 03:20:04.163314    5460 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0624 03:20:04.163336    5460 out.go:239] * 
W0624 03:20:04.165865    5460 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0624 03:20:04.173284    5460 out.go:177] 
--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
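The recovery the log itself suggests is the usual one: once socket_vmnet is reachable again, drop the stale machine and recreate the profile. A sketch:

    minikube delete -p functional-880000
    minikube start -p functional-880000 --driver=qemu2 --network=socket_vmnet

--network=socket_vmnet matches the Network field recorded in the cluster config earlier in this log, and delete clears the machine state under .minikube/machines/functional-880000 left behind by the failed restarts.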
TestFunctional/serial/InvalidService (0.03s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-880000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-880000 apply -f testdata/invalidsvc.yaml: exit status 1 (26.864333ms)
** stderr ** 
	error: context "functional-880000" does not exist
** /stderr **
functional_test.go:2319: kubectl --context functional-880000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
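This failure, like every kubectl-driven failure below it, is downstream of the aborted start: the kubeconfig context named functional-880000 is only written once minikube start succeeds, so kubectl has nothing to resolve. A quick way to confirm on the affected host (sketch):

    kubectl config get-contexts        # functional-880000 will be missing
    kubectl config current-context     # shows what, if anything, is selected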
TestFunctional/parallel/DashboardCmd (0.2s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-880000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-880000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-880000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-880000 --alsologtostderr -v=1] stderr:
I0624 03:20:50.697482    5795 out.go:291] Setting OutFile to fd 1 ...
I0624 03:20:50.698052    5795 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0624 03:20:50.698055    5795 out.go:304] Setting ErrFile to fd 2...
I0624 03:20:50.698063    5795 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0624 03:20:50.698230    5795 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
I0624 03:20:50.698443    5795 mustload.go:65] Loading cluster: functional-880000
I0624 03:20:50.698635    5795 config.go:182] Loaded profile config "functional-880000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0624 03:20:50.703229    5795 out.go:177] * The control-plane node functional-880000 host is not running: state=Stopped
I0624 03:20:50.707036    5795 out.go:177]   To start a cluster, run: "minikube start -p functional-880000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-880000 -n functional-880000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-880000 -n functional-880000: exit status 7 (41.890083ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-880000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)
TestFunctional/parallel/StatusCmd (0.12s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 status: exit status 7 (29.452166ms)
-- stdout --
	functional-880000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-880000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (30.323708ms)
-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped
-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-880000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 status -o json: exit status 7 (30.047917ms)
-- stdout --
	{"Name":"functional-880000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}
-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-880000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-880000 -n functional-880000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-880000 -n functional-880000: exit status 7 (30.697584ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-880000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.12s)
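The recurring exit status 7 is informative on its own: minikube status encodes component health bitwise in its exit code (in recent minikube versions: 1 for the host, 2 for the cluster, 4 for Kubernetes), so an all-stopped profile exits with 1+2+4 = 7. That is why helpers_test treats "exit status 7 (may be ok)" as expected. Sketch:

    out/minikube-darwin-arm64 -p functional-880000 status; echo "exit=$?"
    # host: Stopped ... exit=7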
TestFunctional/parallel/ServiceCmdConnect (0.14s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-880000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-880000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.423209ms)
** stderr ** 
	error: context "functional-880000" does not exist
** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-880000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-880000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-880000 describe po hello-node-connect: exit status 1 (26.325292ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-880000
** /stderr **
functional_test.go:1600: "kubectl --context functional-880000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-880000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-880000 logs -l app=hello-node-connect: exit status 1 (27.477416ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-880000
** /stderr **
functional_test.go:1606: "kubectl --context functional-880000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-880000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-880000 describe svc hello-node-connect: exit status 1 (26.030208ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-880000
** /stderr **
functional_test.go:1612: "kubectl --context functional-880000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-880000 -n functional-880000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-880000 -n functional-880000: exit status 7 (30.480416ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-880000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)
TestFunctional/parallel/PersistentVolumeClaim (0.03s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-880000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-880000 -n functional-880000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-880000 -n functional-880000: exit status 7 (29.444583ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-880000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)
TestFunctional/parallel/SSHCmd (0.12s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh "echo hello": exit status 83 (39.827416ms)
-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"
-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-880000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-880000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-880000\"\n"*. args "out/minikube-darwin-arm64 -p functional-880000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh "cat /etc/hostname": exit status 83 (45.937083ms)
-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"
-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-880000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-880000"- but got *"* The control-plane node functional-880000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-880000\"\n"*. args "out/minikube-darwin-arm64 -p functional-880000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-880000 -n functional-880000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-880000 -n functional-880000: exit status 7 (29.984166ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-880000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.12s)
TestFunctional/parallel/CpCmd (0.27s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (55.399792ms)
-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-880000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh -n functional-880000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh -n functional-880000 "sudo cat /home/docker/cp-test.txt": exit status 83 (43.930791ms)
-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-880000 ssh -n functional-880000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-880000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-880000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 cp functional-880000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2031202953/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 cp functional-880000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2031202953/001/cp-test.txt: exit status 83 (41.61325ms)
-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-880000 cp functional-880000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2031202953/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh -n functional-880000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh -n functional-880000 "sudo cat /home/docker/cp-test.txt": exit status 83 (40.937417ms)
-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-880000 ssh -n functional-880000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2031202953/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-880000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-880000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (47.819084ms)
-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-880000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh -n functional-880000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh -n functional-880000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (39.875958ms)
-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-880000 ssh -n functional-880000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-880000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-880000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.27s)
TestFunctional/parallel/FileSync (0.07s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/5136/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh "sudo cat /etc/test/nested/copy/5136/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh "sudo cat /etc/test/nested/copy/5136/hosts": exit status 83 (43.46475ms)
-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"
-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-880000 ssh "sudo cat /etc/test/nested/copy/5136/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-880000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-880000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-880000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-880000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-880000 -n functional-880000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-880000 -n functional-880000: exit status 7 (30.3335ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-880000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.07s)
TestFunctional/parallel/CertSync (0.29s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/5136.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh "sudo cat /etc/ssl/certs/5136.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh "sudo cat /etc/ssl/certs/5136.pem": exit status 83 (41.097417ms)
-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"
-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/5136.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-880000 ssh \"sudo cat /etc/ssl/certs/5136.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/5136.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-880000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-880000"
  	"""
  )
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/5136.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh "sudo cat /usr/share/ca-certificates/5136.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh "sudo cat /usr/share/ca-certificates/5136.pem": exit status 83 (40.792292ms)
-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"

functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/5136.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-880000 ssh \"sudo cat /usr/share/ca-certificates/5136.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/5136.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-880000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-880000"
  	"""
  )
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (44.665292ms)
-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"

functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-880000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-880000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-880000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/51362.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh "sudo cat /etc/ssl/certs/51362.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh "sudo cat /etc/ssl/certs/51362.pem": exit status 83 (39.607375ms)
-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"

functional_test.go:1998: failed to check existence of "/etc/ssl/certs/51362.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-880000 ssh \"sudo cat /etc/ssl/certs/51362.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/51362.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-880000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-880000"
  	"""
  )
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/51362.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh "sudo cat /usr/share/ca-certificates/51362.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh "sudo cat /usr/share/ca-certificates/51362.pem": exit status 83 (48.449917ms)
-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"

functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/51362.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-880000 ssh \"sudo cat /usr/share/ca-certificates/51362.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/51362.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-880000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-880000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (41.772041ms)
-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"

functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-880000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-880000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-880000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-880000 -n functional-880000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-880000 -n functional-880000: exit status 7 (29.865583ms)
-- stdout --
	Stopped

helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-880000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.29s)
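The three target paths checked per certificate are related, not independent: per the mappings above, /etc/ssl/certs/51391683.0 is the OpenSSL subject-hash filename for minikube_test.pem and 3ec20f2e.0 the one for minikube_test2.pem, the naming scheme used by c_rehash-style trust directories. The hashes can be reproduced on the host; a sketch, assuming the test's testdata layout:

    openssl x509 -noout -subject_hash -in testdata/minikube_test.pem     # expect 51391683
    openssl x509 -noout -subject_hash -in testdata/minikube_test2.pem    # expect 3ec20f2e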
TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-880000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-880000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (27.576541ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-880000
** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-880000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-880000
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-880000
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-880000
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-880000
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-880000
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-880000 -n functional-880000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-880000 -n functional-880000: exit status 7 (31.158333ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-880000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh "sudo systemctl is-active crio": exit status 83 (41.493167ms)
-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"
-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"
-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-880000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-880000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 version -o=json --components: exit status 83 (39.905125ms)
-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"
-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-880000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-880000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-880000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-880000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-880000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-880000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-880000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-880000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-880000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-880000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-880000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-880000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-880000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-880000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-880000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-880000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-880000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-880000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-880000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-880000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-880000 image ls --format short --alsologtostderr:
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-880000 image ls --format short --alsologtostderr:
I0624 03:20:51.133238    5812 out.go:291] Setting OutFile to fd 1 ...
I0624 03:20:51.133392    5812 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0624 03:20:51.133395    5812 out.go:304] Setting ErrFile to fd 2...
I0624 03:20:51.133397    5812 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0624 03:20:51.133531    5812 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
I0624 03:20:51.133938    5812 config.go:182] Loaded profile config "functional-880000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0624 03:20:51.134002    5812 config.go:182] Loaded profile config "functional-880000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-880000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-880000 image ls --format table --alsologtostderr:
I0624 03:20:51.203476    5816 out.go:291] Setting OutFile to fd 1 ...
I0624 03:20:51.203621    5816 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0624 03:20:51.203624    5816 out.go:304] Setting ErrFile to fd 2...
I0624 03:20:51.203626    5816 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0624 03:20:51.203792    5816 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
I0624 03:20:51.204205    5816 config.go:182] Loaded profile config "functional-880000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0624 03:20:51.204281    5816 config.go:182] Loaded profile config "functional-880000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-880000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-880000 image ls --format json --alsologtostderr:
I0624 03:20:51.169424    5814 out.go:291] Setting OutFile to fd 1 ...
I0624 03:20:51.169589    5814 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0624 03:20:51.169592    5814 out.go:304] Setting ErrFile to fd 2...
I0624 03:20:51.169594    5814 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0624 03:20:51.169713    5814 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
I0624 03:20:51.170156    5814 config.go:182] Loaded profile config "functional-880000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0624 03:20:51.170215    5814 config.go:182] Loaded profile config "functional-880000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.03s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-880000 image ls --format yaml --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-880000 image ls --format yaml --alsologtostderr:
I0624 03:20:51.097523    5810 out.go:291] Setting OutFile to fd 1 ...
I0624 03:20:51.097700    5810 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0624 03:20:51.097703    5810 out.go:304] Setting ErrFile to fd 2...
I0624 03:20:51.097705    5810 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0624 03:20:51.097829    5810 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
I0624 03:20:51.098310    5810 config.go:182] Loaded profile config "functional-880000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0624 03:20:51.098373    5810 config.go:182] Loaded profile config "functional-880000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh pgrep buildkitd: exit status 83 (41.094083ms)
-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"
-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 image build -t localhost/my-image:functional-880000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-880000 image build -t localhost/my-image:functional-880000 testdata/build --alsologtostderr:
I0624 03:20:51.281122    5820 out.go:291] Setting OutFile to fd 1 ...
I0624 03:20:51.281762    5820 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0624 03:20:51.281767    5820 out.go:304] Setting ErrFile to fd 2...
I0624 03:20:51.281770    5820 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0624 03:20:51.282000    5820 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
I0624 03:20:51.282598    5820 config.go:182] Loaded profile config "functional-880000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0624 03:20:51.283049    5820 config.go:182] Loaded profile config "functional-880000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0624 03:20:51.283280    5820 build_images.go:133] succeeded building to: 
I0624 03:20:51.283284    5820 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 image ls
functional_test.go:442: expected "localhost/my-image:functional-880000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

TestFunctional/parallel/DockerEnv/bash (0.04s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-880000 docker-env) && out/minikube-darwin-arm64 status -p functional-880000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-880000 docker-env) && out/minikube-darwin-arm64 status -p functional-880000": exit status 1 (43.432958ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 update-context --alsologtostderr -v=2: exit status 83 (40.67925ms)
-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"
-- /stdout --
** stderr ** 
	I0624 03:20:50.972398    5804 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:20:50.973265    5804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:20:50.973269    5804 out.go:304] Setting ErrFile to fd 2...
	I0624 03:20:50.973271    5804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:20:50.973473    5804 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:20:50.973683    5804 mustload.go:65] Loading cluster: functional-880000
	I0624 03:20:50.973872    5804 config.go:182] Loaded profile config "functional-880000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:20:50.975504    5804 out.go:177] * The control-plane node functional-880000 host is not running: state=Stopped
	I0624 03:20:50.979547    5804 out.go:177]   To start a cluster, run: "minikube start -p functional-880000"
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-880000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-880000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-880000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 update-context --alsologtostderr -v=2: exit status 83 (41.605333ms)
-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"
-- /stdout --
** stderr ** 
	I0624 03:20:51.055432    5808 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:20:51.055590    5808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:20:51.055593    5808 out.go:304] Setting ErrFile to fd 2...
	I0624 03:20:51.055595    5808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:20:51.055717    5808 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:20:51.055945    5808 mustload.go:65] Loading cluster: functional-880000
	I0624 03:20:51.056147    5808 config.go:182] Loaded profile config "functional-880000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:20:51.060424    5808 out.go:177] * The control-plane node functional-880000 host is not running: state=Stopped
	I0624 03:20:51.064607    5808 out.go:177]   To start a cluster, run: "minikube start -p functional-880000"
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-880000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-880000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-880000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 update-context --alsologtostderr -v=2: exit status 83 (42.6695ms)
-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"
-- /stdout --
** stderr ** 
	I0624 03:20:51.013063    5806 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:20:51.013212    5806 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:20:51.013215    5806 out.go:304] Setting ErrFile to fd 2...
	I0624 03:20:51.013217    5806 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:20:51.013354    5806 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:20:51.013572    5806 mustload.go:65] Loading cluster: functional-880000
	I0624 03:20:51.013758    5806 config.go:182] Loaded profile config "functional-880000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:20:51.018600    5806 out.go:177] * The control-plane node functional-880000 host is not running: state=Stopped
	I0624 03:20:51.022570    5806 out.go:177]   To start a cluster, run: "minikube start -p functional-880000"
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-880000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-880000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-880000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-880000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-880000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.346542ms)
** stderr ** 
	error: context "functional-880000" does not exist
** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-880000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 service list: exit status 83 (42.880917ms)
-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"
-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-880000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-880000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-880000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.04s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 service list -o json: exit status 83 (39.827416ms)
-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"
-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-880000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 service --namespace=default --https --url hello-node: exit status 83 (39.743375ms)
-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"
-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-880000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

TestFunctional/parallel/ServiceCmd/Format (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 service hello-node --url --format={{.IP}}: exit status 83 (42.477458ms)
-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"
-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-880000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-880000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-880000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

TestFunctional/parallel/ServiceCmd/URL (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 service hello-node --url: exit status 83 (40.730083ms)
-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"
-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-880000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-880000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-880000"
functional_test.go:1565: failed to parse "* The control-plane node functional-880000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-880000\"": parse "* The control-plane node functional-880000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-880000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-880000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-880000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0624 03:20:05.923616    5579 out.go:291] Setting OutFile to fd 1 ...
I0624 03:20:05.923784    5579 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0624 03:20:05.923788    5579 out.go:304] Setting ErrFile to fd 2...
I0624 03:20:05.923790    5579 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0624 03:20:05.923920    5579 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
I0624 03:20:05.924154    5579 mustload.go:65] Loading cluster: functional-880000
I0624 03:20:05.924338    5579 config.go:182] Loaded profile config "functional-880000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0624 03:20:05.929496    5579 out.go:177] * The control-plane node functional-880000 host is not running: state=Stopped
I0624 03:20:05.940456    5579 out.go:177]   To start a cluster, run: "minikube start -p functional-880000"
stdout: * The control-plane node functional-880000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-880000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-880000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-880000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-880000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-880000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 5578: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-880000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-880000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-880000": client config: context "functional-880000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (87.78s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-880000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-880000 get svc nginx-svc: exit status 1 (71.147417ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-880000
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-880000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (87.78s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 image load --daemon gcr.io/google-containers/addon-resizer:functional-880000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-880000 image load --daemon gcr.io/google-containers/addon-resizer:functional-880000 --alsologtostderr: (1.272099s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-880000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.31s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 image load --daemon gcr.io/google-containers/addon-resizer:functional-880000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-880000 image load --daemon gcr.io/google-containers/addon-resizer:functional-880000 --alsologtostderr: (1.314151334s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-880000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.35s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.921448625s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-880000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 image load --daemon gcr.io/google-containers/addon-resizer:functional-880000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-880000 image load --daemon gcr.io/google-containers/addon-resizer:functional-880000 --alsologtostderr: (1.171346s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-880000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.17s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 image save gcr.io/google-containers/addon-resizer:functional-880000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-880000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.036798542s)
-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached
-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)
resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000
resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200
resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400
resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600
resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800
resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000
resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1
DNS configuration (for scoped queries)
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 16 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (38.7s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (38.70s)

TestMultiControlPlane/serial/StartCluster (10.07s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-688000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-688000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.998728084s)
-- stdout --
	* [ha-688000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-688000" primary control-plane node in "ha-688000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-688000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0624 03:22:38.095592    5864 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:22:38.095726    5864 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:22:38.095729    5864 out.go:304] Setting ErrFile to fd 2...
	I0624 03:22:38.095731    5864 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:22:38.095860    5864 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:22:38.096959    5864 out.go:298] Setting JSON to false
	I0624 03:22:38.112909    5864 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4928,"bootTime":1719219630,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:22:38.112972    5864 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:22:38.118771    5864 out.go:177] * [ha-688000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:22:38.126680    5864 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:22:38.126736    5864 notify.go:220] Checking for updates...
	I0624 03:22:38.135602    5864 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:22:38.138709    5864 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:22:38.141632    5864 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:22:38.144674    5864 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:22:38.147678    5864 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:22:38.150766    5864 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:22:38.154609    5864 out.go:177] * Using the qemu2 driver based on user configuration
	I0624 03:22:38.160531    5864 start.go:297] selected driver: qemu2
	I0624 03:22:38.160536    5864 start.go:901] validating driver "qemu2" against <nil>
	I0624 03:22:38.160541    5864 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:22:38.162785    5864 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0624 03:22:38.165634    5864 out.go:177] * Automatically selected the socket_vmnet network
	I0624 03:22:38.168712    5864 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 03:22:38.168735    5864 cni.go:84] Creating CNI manager for ""
	I0624 03:22:38.168739    5864 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0624 03:22:38.168744    5864 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0624 03:22:38.168780    5864 start.go:340] cluster config:
	{Name:ha-688000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:22:38.173354    5864 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:22:38.181661    5864 out.go:177] * Starting "ha-688000" primary control-plane node in "ha-688000" cluster
	I0624 03:22:38.185686    5864 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:22:38.185703    5864 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0624 03:22:38.185709    5864 cache.go:56] Caching tarball of preloaded images
	I0624 03:22:38.185772    5864 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0624 03:22:38.185779    5864 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:22:38.185973    5864 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/ha-688000/config.json ...
	I0624 03:22:38.185984    5864 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/ha-688000/config.json: {Name:mk55f17064bfe04dac2842d98caa0e123d4d320c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:22:38.186292    5864 start.go:360] acquireMachinesLock for ha-688000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:22:38.186323    5864 start.go:364] duration metric: took 25.667µs to acquireMachinesLock for "ha-688000"
	I0624 03:22:38.186332    5864 start.go:93] Provisioning new machine with config: &{Name:ha-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:22:38.186355    5864 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:22:38.193693    5864 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0624 03:22:38.209137    5864 start.go:159] libmachine.API.Create for "ha-688000" (driver="qemu2")
	I0624 03:22:38.209171    5864 client.go:168] LocalClient.Create starting
	I0624 03:22:38.209235    5864 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:22:38.209267    5864 main.go:141] libmachine: Decoding PEM data...
	I0624 03:22:38.209275    5864 main.go:141] libmachine: Parsing certificate...
	I0624 03:22:38.209319    5864 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:22:38.209354    5864 main.go:141] libmachine: Decoding PEM data...
	I0624 03:22:38.209365    5864 main.go:141] libmachine: Parsing certificate...
	I0624 03:22:38.209759    5864 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:22:38.525678    5864 main.go:141] libmachine: Creating SSH key...
	I0624 03:22:38.644802    5864 main.go:141] libmachine: Creating Disk image...
	I0624 03:22:38.644807    5864 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:22:38.645019    5864 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/ha-688000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/ha-688000/disk.qcow2
	I0624 03:22:38.657780    5864 main.go:141] libmachine: STDOUT: 
	I0624 03:22:38.657797    5864 main.go:141] libmachine: STDERR: 
	I0624 03:22:38.657855    5864 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/ha-688000/disk.qcow2 +20000M
	I0624 03:22:38.668731    5864 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:22:38.668746    5864 main.go:141] libmachine: STDERR: 
	I0624 03:22:38.668763    5864 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/ha-688000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/ha-688000/disk.qcow2
	I0624 03:22:38.668768    5864 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:22:38.668796    5864 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/ha-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/ha-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/ha-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:23:a3:ff:9f:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/ha-688000/disk.qcow2
	I0624 03:22:38.670466    5864 main.go:141] libmachine: STDOUT: 
	I0624 03:22:38.670482    5864 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:22:38.670510    5864 client.go:171] duration metric: took 461.328042ms to LocalClient.Create
	I0624 03:22:40.672679    5864 start.go:128] duration metric: took 2.486317708s to createHost
	I0624 03:22:40.672783    5864 start.go:83] releasing machines lock for "ha-688000", held for 2.486450208s
	W0624 03:22:40.672856    5864 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:22:40.687181    5864 out.go:177] * Deleting "ha-688000" in qemu2 ...
	W0624 03:22:40.714357    5864 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:22:40.714377    5864 start.go:728] Will try again in 5 seconds ...
	I0624 03:22:45.716586    5864 start.go:360] acquireMachinesLock for ha-688000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:22:45.717006    5864 start.go:364] duration metric: took 351.292µs to acquireMachinesLock for "ha-688000"
	I0624 03:22:45.717132    5864 start.go:93] Provisioning new machine with config: &{Name:ha-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:22:45.717462    5864 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:22:45.733241    5864 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0624 03:22:45.782694    5864 start.go:159] libmachine.API.Create for "ha-688000" (driver="qemu2")
	I0624 03:22:45.782742    5864 client.go:168] LocalClient.Create starting
	I0624 03:22:45.782848    5864 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:22:45.782915    5864 main.go:141] libmachine: Decoding PEM data...
	I0624 03:22:45.782934    5864 main.go:141] libmachine: Parsing certificate...
	I0624 03:22:45.783000    5864 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:22:45.783045    5864 main.go:141] libmachine: Decoding PEM data...
	I0624 03:22:45.783075    5864 main.go:141] libmachine: Parsing certificate...
	I0624 03:22:45.783634    5864 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:22:45.937966    5864 main.go:141] libmachine: Creating SSH key...
	I0624 03:22:45.994973    5864 main.go:141] libmachine: Creating Disk image...
	I0624 03:22:45.994978    5864 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:22:45.995168    5864 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/ha-688000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/ha-688000/disk.qcow2
	I0624 03:22:46.007948    5864 main.go:141] libmachine: STDOUT: 
	I0624 03:22:46.007975    5864 main.go:141] libmachine: STDERR: 
	I0624 03:22:46.008030    5864 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/ha-688000/disk.qcow2 +20000M
	I0624 03:22:46.019065    5864 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:22:46.019081    5864 main.go:141] libmachine: STDERR: 
	I0624 03:22:46.019105    5864 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/ha-688000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/ha-688000/disk.qcow2
	I0624 03:22:46.019110    5864 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:22:46.019137    5864 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/ha-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/ha-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/ha-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:dc:13:5f:9b:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/ha-688000/disk.qcow2
	I0624 03:22:46.020934    5864 main.go:141] libmachine: STDOUT: 
	I0624 03:22:46.020949    5864 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:22:46.020967    5864 client.go:171] duration metric: took 238.221875ms to LocalClient.Create
	I0624 03:22:48.023125    5864 start.go:128] duration metric: took 2.305633417s to createHost
	I0624 03:22:48.023233    5864 start.go:83] releasing machines lock for "ha-688000", held for 2.306196708s
	W0624 03:22:48.023627    5864 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-688000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-688000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:22:48.034222    5864 out.go:177] 
	W0624 03:22:48.040318    5864 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:22:48.040372    5864 out.go:239] * 
	* 
	W0624 03:22:48.042896    5864 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:22:48.052192    5864 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-688000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (69.752333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (10.07s)
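[Editor's note] Both create attempts above fail at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM never gets a network and the run exits with GUEST_PROVISION. That one failure cascades through every TestMultiControlPlane subtest below. A minimal diagnostic sketch for the runner, using only the paths shown in the log (the Homebrew service command is an assumption about how socket_vmnet was installed):

	# Is the unix socket present, and is a socket_vmnet daemon actually holding it?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If no daemon is running and socket_vmnet came from Homebrew, this would bring it back up
	sudo brew services start socket_vmnet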

                                                
                                    
TestMultiControlPlane/serial/DeployApp (95.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (58.973791ms)

                                                
                                                
** stderr ** 
	error: cluster "ha-688000" does not exist

                                                
                                                
** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- rollout status deployment/busybox: exit status 1 (56.692791ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-688000"

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.190125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-688000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.909084ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-688000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.841916ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-688000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.312083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-688000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.055916ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-688000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.012083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-688000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.563708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-688000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.500542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-688000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.375667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-688000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.644791ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-688000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.081459ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-688000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.222042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-688000"

                                                
                                                
** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.149666ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-688000"

                                                
                                                
** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.0625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-688000"

                                                
                                                
** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.060458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-688000"

                                                
                                                
** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (30.646084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (95.67s)
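[Editor's note] The repeated "no server found for cluster" and "cluster ... does not exist" errors above are all downstream of the StartCluster failure: the ha-688000 kubeconfig entry was never written, so kubectl has nothing to talk to. A quick confirmation sketch, assuming the default kubeconfig location on the runner:

	# Expect no matching entry (empty grep; get-contexts reports the context is not found)
	kubectl config get-clusters | grep ha-688000
	kubectl config get-contexts ha-688000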

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.273792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-688000"

                                                
                                                
** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (30.055583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-688000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-688000 -v=7 --alsologtostderr: exit status 83 (42.918167ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-688000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-688000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0624 03:24:23.918618    5956 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:24:23.919190    5956 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:24:23.919194    5956 out.go:304] Setting ErrFile to fd 2...
	I0624 03:24:23.919196    5956 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:24:23.919365    5956 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:24:23.919597    5956 mustload.go:65] Loading cluster: ha-688000
	I0624 03:24:23.919778    5956 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:24:23.924137    5956 out.go:177] * The control-plane node ha-688000 host is not running: state=Stopped
	I0624 03:24:23.928119    5956 out.go:177]   To start a cluster, run: "minikube start -p ha-688000"

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-688000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (29.514458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.07s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-688000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-688000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.767625ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: ha-688000

                                                
                                                
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-688000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-688000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (29.881041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-688000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-688000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-688000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-688000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-688000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-688000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-688000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-688000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (30.371792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.10s)
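[Editor's note] The two assertions above compare against the full profile JSON, which makes the actual mismatch hard to see. A sketch of the same check with the relevant fields projected out, assuming jq is available on the runner:

	# The test wants Status "HAppy" and 4 nodes; this profile reports "Stopped" and 1
	out/minikube-darwin-arm64 profile list --output json | jq '.valid[] | {Status, nodes: (.Config.Nodes | length)}'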

                                                
                                    
TestMultiControlPlane/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 status --output json -v=7 --alsologtostderr: exit status 7 (29.427458ms)

                                                
                                                
-- stdout --
	{"Name":"ha-688000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0624 03:24:24.149778    5969 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:24:24.149932    5969 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:24:24.149935    5969 out.go:304] Setting ErrFile to fd 2...
	I0624 03:24:24.149937    5969 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:24:24.150062    5969 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:24:24.150175    5969 out.go:298] Setting JSON to true
	I0624 03:24:24.150187    5969 mustload.go:65] Loading cluster: ha-688000
	I0624 03:24:24.150253    5969 notify.go:220] Checking for updates...
	I0624 03:24:24.150381    5969 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:24:24.150386    5969 status.go:255] checking status of ha-688000 ...
	I0624 03:24:24.150584    5969 status.go:330] ha-688000 host status = "Stopped" (err=<nil>)
	I0624 03:24:24.150588    5969 status.go:343] host is not running, skipping remaining checks
	I0624 03:24:24.150590    5969 status.go:257] ha-688000 status: &{Name:ha-688000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-688000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (29.864375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
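[Editor's note] The decode error above ("cannot unmarshal object into Go value of type []cmd.Status") is a shape mismatch rather than corrupt output: with a single stopped node, status --output json prints one JSON object, while the test decodes into a slice, the shape it would get from a multi-node cluster. Assuming jq is available, the shape is quick to inspect:

	# Prints "object" here; the test expects a top-level array
	out/minikube-darwin-arm64 -p ha-688000 status --output json | jq type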

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 node stop m02 -v=7 --alsologtostderr: exit status 85 (45.129416ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0624 03:24:24.209557    5973 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:24:24.210150    5973 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:24:24.210154    5973 out.go:304] Setting ErrFile to fd 2...
	I0624 03:24:24.210156    5973 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:24:24.210332    5973 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:24:24.210561    5973 mustload.go:65] Loading cluster: ha-688000
	I0624 03:24:24.210780    5973 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:24:24.214232    5973 out.go:177] 
	W0624 03:24:24.217190    5973 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0624 03:24:24.217194    5973 out.go:239] * 
	* 
	W0624 03:24:24.219112    5973 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:24:24.222180    5973 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-688000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr: exit status 7 (29.797625ms)

                                                
                                                
-- stdout --
	ha-688000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0624 03:24:24.255352    5975 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:24:24.255500    5975 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:24:24.255503    5975 out.go:304] Setting ErrFile to fd 2...
	I0624 03:24:24.255505    5975 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:24:24.255638    5975 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:24:24.255746    5975 out.go:298] Setting JSON to false
	I0624 03:24:24.255756    5975 mustload.go:65] Loading cluster: ha-688000
	I0624 03:24:24.255812    5975 notify.go:220] Checking for updates...
	I0624 03:24:24.255951    5975 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:24:24.255956    5975 status.go:255] checking status of ha-688000 ...
	I0624 03:24:24.256181    5975 status.go:330] ha-688000 host status = "Stopped" (err=<nil>)
	I0624 03:24:24.256185    5975 status.go:343] host is not running, skipping remaining checks
	I0624 03:24:24.256187    5975 status.go:257] ha-688000 status: &{Name:ha-688000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr": ha-688000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr": ha-688000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr": ha-688000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr": ha-688000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (30.353833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)
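[Editor's note] GUEST_NODE_RETRIEVE above means minikube has no node named m02 in this profile, which follows from the earlier failures: the cluster never grew past its single stopped control-plane node, so there is no secondary node to stop. A quick check using only names from the log:

	# Lists the nodes minikube knows about for this profile; m02 was never created
	out/minikube-darwin-arm64 node list -p ha-688000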

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-688000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-688000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-688000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-688000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
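The assertion above reduces to a single field of that JSON blob: .valid[].Status is "Stopped" where the test wanted "Degraded". Assuming jq is available on the host (an assumption; it is not part of this log), the field the test compares can be pulled out of the same command's output:

	out/minikube-darwin-arm64 profile list --output json | jq -r '.valid[] | "\(.Name): \(.Status)"'
	# ha-688000: Stopped    ("Degraded" was expected)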
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (29.123042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.10s)

TestMultiControlPlane/serial/RestartSecondaryNode (52.32s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 node start m02 -v=7 --alsologtostderr: exit status 85 (46.332083ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0624 03:24:24.419875    5985 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:24:24.420264    5985 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:24:24.420268    5985 out.go:304] Setting ErrFile to fd 2...
	I0624 03:24:24.420270    5985 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:24:24.420408    5985 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:24:24.420674    5985 mustload.go:65] Loading cluster: ha-688000
	I0624 03:24:24.420855    5985 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:24:24.425457    5985 out.go:177] 
	W0624 03:24:24.428446    5985 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0624 03:24:24.428451    5985 out.go:239] * 
	* 
	W0624 03:24:24.430328    5985 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:24:24.433440    5985 out.go:177] 

** /stderr **
ha_test.go:422: I0624 03:24:24.419875    5985 out.go:291] Setting OutFile to fd 1 ...
I0624 03:24:24.420264    5985 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0624 03:24:24.420268    5985 out.go:304] Setting ErrFile to fd 2...
I0624 03:24:24.420270    5985 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0624 03:24:24.420408    5985 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
I0624 03:24:24.420674    5985 mustload.go:65] Loading cluster: ha-688000
I0624 03:24:24.420855    5985 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0624 03:24:24.425457    5985 out.go:177] 
W0624 03:24:24.428446    5985 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0624 03:24:24.428451    5985 out.go:239] * 
* 
W0624 03:24:24.430328    5985 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0624 03:24:24.433440    5985 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-688000 node start m02 -v=7 --alsologtostderr": exit status 85
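Exit status 85 accompanies GUEST_NODE_RETRIEVE: the profile has no m02 to start, because the earlier StartCluster failure left only the primary node recorded. (The other 8x codes in this report pair the same way: 80 with GUEST_PROVISION, 83 with a stopped guest.) Listing the profile's nodes with the same binary would confirm this (a sketch, not captured output):

	out/minikube-darwin-arm64 node list -p ha-688000
	# only the primary ha-688000 node is listed, hence "Could not find node m02"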
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr: exit status 7 (30.050167ms)

-- stdout --
	ha-688000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0624 03:24:24.465493    5987 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:24:24.465672    5987 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:24:24.465675    5987 out.go:304] Setting ErrFile to fd 2...
	I0624 03:24:24.465682    5987 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:24:24.465813    5987 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:24:24.465931    5987 out.go:298] Setting JSON to false
	I0624 03:24:24.465941    5987 mustload.go:65] Loading cluster: ha-688000
	I0624 03:24:24.466001    5987 notify.go:220] Checking for updates...
	I0624 03:24:24.466142    5987 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:24:24.466149    5987 status.go:255] checking status of ha-688000 ...
	I0624 03:24:24.466346    5987 status.go:330] ha-688000 host status = "Stopped" (err=<nil>)
	I0624 03:24:24.466349    5987 status.go:343] host is not running, skipping remaining checks
	I0624 03:24:24.466352    5987 status.go:257] ha-688000 status: &{Name:ha-688000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr: exit status 7 (75.663292ms)

-- stdout --
	ha-688000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0624 03:24:25.373753    5989 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:24:25.373973    5989 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:24:25.373977    5989 out.go:304] Setting ErrFile to fd 2...
	I0624 03:24:25.373980    5989 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:24:25.374164    5989 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:24:25.374319    5989 out.go:298] Setting JSON to false
	I0624 03:24:25.374335    5989 mustload.go:65] Loading cluster: ha-688000
	I0624 03:24:25.374373    5989 notify.go:220] Checking for updates...
	I0624 03:24:25.374595    5989 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:24:25.374601    5989 status.go:255] checking status of ha-688000 ...
	I0624 03:24:25.374894    5989 status.go:330] ha-688000 host status = "Stopped" (err=<nil>)
	I0624 03:24:25.374899    5989 status.go:343] host is not running, skipping remaining checks
	I0624 03:24:25.374902    5989 status.go:257] ha-688000 status: &{Name:ha-688000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr: exit status 7 (75.8955ms)

-- stdout --
	ha-688000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0624 03:24:26.205273    5991 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:24:26.205500    5991 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:24:26.205505    5991 out.go:304] Setting ErrFile to fd 2...
	I0624 03:24:26.205508    5991 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:24:26.205690    5991 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:24:26.205862    5991 out.go:298] Setting JSON to false
	I0624 03:24:26.205877    5991 mustload.go:65] Loading cluster: ha-688000
	I0624 03:24:26.205925    5991 notify.go:220] Checking for updates...
	I0624 03:24:26.207324    5991 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:24:26.207335    5991 status.go:255] checking status of ha-688000 ...
	I0624 03:24:26.207603    5991 status.go:330] ha-688000 host status = "Stopped" (err=<nil>)
	I0624 03:24:26.207609    5991 status.go:343] host is not running, skipping remaining checks
	I0624 03:24:26.207612    5991 status.go:257] ha-688000 status: &{Name:ha-688000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr: exit status 7 (73.976875ms)

-- stdout --
	ha-688000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0624 03:24:29.438094    5993 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:24:29.438303    5993 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:24:29.438307    5993 out.go:304] Setting ErrFile to fd 2...
	I0624 03:24:29.438310    5993 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:24:29.438484    5993 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:24:29.438645    5993 out.go:298] Setting JSON to false
	I0624 03:24:29.438657    5993 mustload.go:65] Loading cluster: ha-688000
	I0624 03:24:29.438694    5993 notify.go:220] Checking for updates...
	I0624 03:24:29.438924    5993 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:24:29.438933    5993 status.go:255] checking status of ha-688000 ...
	I0624 03:24:29.439225    5993 status.go:330] ha-688000 host status = "Stopped" (err=<nil>)
	I0624 03:24:29.439230    5993 status.go:343] host is not running, skipping remaining checks
	I0624 03:24:29.439233    5993 status.go:257] ha-688000 status: &{Name:ha-688000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr: exit status 7 (73.505583ms)

-- stdout --
	ha-688000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0624 03:24:31.636531    5999 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:24:31.636735    5999 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:24:31.636739    5999 out.go:304] Setting ErrFile to fd 2...
	I0624 03:24:31.636742    5999 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:24:31.636916    5999 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:24:31.637090    5999 out.go:298] Setting JSON to false
	I0624 03:24:31.637107    5999 mustload.go:65] Loading cluster: ha-688000
	I0624 03:24:31.637146    5999 notify.go:220] Checking for updates...
	I0624 03:24:31.637350    5999 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:24:31.637356    5999 status.go:255] checking status of ha-688000 ...
	I0624 03:24:31.637652    5999 status.go:330] ha-688000 host status = "Stopped" (err=<nil>)
	I0624 03:24:31.637657    5999 status.go:343] host is not running, skipping remaining checks
	I0624 03:24:31.637660    5999 status.go:257] ha-688000 status: &{Name:ha-688000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr: exit status 7 (72.5225ms)

-- stdout --
	ha-688000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0624 03:24:36.545901    6001 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:24:36.546086    6001 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:24:36.546090    6001 out.go:304] Setting ErrFile to fd 2...
	I0624 03:24:36.546094    6001 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:24:36.546276    6001 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:24:36.546435    6001 out.go:298] Setting JSON to false
	I0624 03:24:36.546448    6001 mustload.go:65] Loading cluster: ha-688000
	I0624 03:24:36.546489    6001 notify.go:220] Checking for updates...
	I0624 03:24:36.546695    6001 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:24:36.546702    6001 status.go:255] checking status of ha-688000 ...
	I0624 03:24:36.546976    6001 status.go:330] ha-688000 host status = "Stopped" (err=<nil>)
	I0624 03:24:36.546981    6001 status.go:343] host is not running, skipping remaining checks
	I0624 03:24:36.546983    6001 status.go:257] ha-688000 status: &{Name:ha-688000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr: exit status 7 (72.7085ms)

-- stdout --
	ha-688000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0624 03:24:41.892599    6007 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:24:41.892815    6007 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:24:41.892819    6007 out.go:304] Setting ErrFile to fd 2...
	I0624 03:24:41.892821    6007 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:24:41.892998    6007 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:24:41.893166    6007 out.go:298] Setting JSON to false
	I0624 03:24:41.893179    6007 mustload.go:65] Loading cluster: ha-688000
	I0624 03:24:41.893221    6007 notify.go:220] Checking for updates...
	I0624 03:24:41.893442    6007 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:24:41.893449    6007 status.go:255] checking status of ha-688000 ...
	I0624 03:24:41.893728    6007 status.go:330] ha-688000 host status = "Stopped" (err=<nil>)
	I0624 03:24:41.893733    6007 status.go:343] host is not running, skipping remaining checks
	I0624 03:24:41.893736    6007 status.go:257] ha-688000 status: &{Name:ha-688000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr: exit status 7 (76.126125ms)

-- stdout --
	ha-688000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0624 03:24:51.857331    6011 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:24:51.857572    6011 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:24:51.857576    6011 out.go:304] Setting ErrFile to fd 2...
	I0624 03:24:51.857580    6011 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:24:51.857779    6011 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:24:51.857954    6011 out.go:298] Setting JSON to false
	I0624 03:24:51.857968    6011 mustload.go:65] Loading cluster: ha-688000
	I0624 03:24:51.858011    6011 notify.go:220] Checking for updates...
	I0624 03:24:51.858215    6011 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:24:51.858222    6011 status.go:255] checking status of ha-688000 ...
	I0624 03:24:51.858503    6011 status.go:330] ha-688000 host status = "Stopped" (err=<nil>)
	I0624 03:24:51.858508    6011 status.go:343] host is not running, skipping remaining checks
	I0624 03:24:51.858511    6011 status.go:257] ha-688000 status: &{Name:ha-688000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr: exit status 7 (70.625542ms)

-- stdout --
	ha-688000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0624 03:25:16.672958    6013 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:25:16.673147    6013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:25:16.673157    6013 out.go:304] Setting ErrFile to fd 2...
	I0624 03:25:16.673161    6013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:25:16.673333    6013 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:25:16.673489    6013 out.go:298] Setting JSON to false
	I0624 03:25:16.673503    6013 mustload.go:65] Loading cluster: ha-688000
	I0624 03:25:16.673546    6013 notify.go:220] Checking for updates...
	I0624 03:25:16.673767    6013 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:25:16.673775    6013 status.go:255] checking status of ha-688000 ...
	I0624 03:25:16.674033    6013 status.go:330] ha-688000 host status = "Stopped" (err=<nil>)
	I0624 03:25:16.674038    6013 status.go:343] host is not running, skipping remaining checks
	I0624 03:25:16.674041    6013 status.go:257] ha-688000 status: &{Name:ha-688000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (33.131417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (52.32s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-688000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-688000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-688000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-688000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-688000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-688000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-688000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-688000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (29.25325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.10s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.86s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-688000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-688000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-688000 -v=7 --alsologtostderr: (3.50336525s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-688000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-688000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.220453667s)

-- stdout --
	* [ha-688000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-688000" primary control-plane node in "ha-688000" cluster
	* Restarting existing qemu2 VM for "ha-688000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-688000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0624 03:25:20.406315    6045 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:25:20.406478    6045 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:25:20.406483    6045 out.go:304] Setting ErrFile to fd 2...
	I0624 03:25:20.406485    6045 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:25:20.406642    6045 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:25:20.407887    6045 out.go:298] Setting JSON to false
	I0624 03:25:20.426980    6045 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5090,"bootTime":1719219630,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:25:20.427043    6045 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:25:20.431016    6045 out.go:177] * [ha-688000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:25:20.437784    6045 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:25:20.437816    6045 notify.go:220] Checking for updates...
	I0624 03:25:20.443920    6045 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:25:20.446790    6045 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:25:20.450857    6045 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:25:20.453892    6045 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:25:20.456835    6045 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:25:20.460048    6045 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:25:20.460110    6045 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:25:20.463868    6045 out.go:177] * Using the qemu2 driver based on existing profile
	I0624 03:25:20.470762    6045 start.go:297] selected driver: qemu2
	I0624 03:25:20.470768    6045 start.go:901] validating driver "qemu2" against &{Name:ha-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.2 ClusterName:ha-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:25:20.470812    6045 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:25:20.473140    6045 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 03:25:20.473191    6045 cni.go:84] Creating CNI manager for ""
	I0624 03:25:20.473196    6045 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0624 03:25:20.473245    6045 start.go:340] cluster config:
	{Name:ha-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-688000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:25:20.477620    6045 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:25:20.485726    6045 out.go:177] * Starting "ha-688000" primary control-plane node in "ha-688000" cluster
	I0624 03:25:20.489859    6045 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:25:20.489876    6045 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0624 03:25:20.489887    6045 cache.go:56] Caching tarball of preloaded images
	I0624 03:25:20.489956    6045 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0624 03:25:20.489962    6045 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:25:20.490029    6045 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/ha-688000/config.json ...
	I0624 03:25:20.490492    6045 start.go:360] acquireMachinesLock for ha-688000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:25:20.490529    6045 start.go:364] duration metric: took 30.917µs to acquireMachinesLock for "ha-688000"
	I0624 03:25:20.490538    6045 start.go:96] Skipping create...Using existing machine configuration
	I0624 03:25:20.490545    6045 fix.go:54] fixHost starting: 
	I0624 03:25:20.490667    6045 fix.go:112] recreateIfNeeded on ha-688000: state=Stopped err=<nil>
	W0624 03:25:20.490678    6045 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 03:25:20.498838    6045 out.go:177] * Restarting existing qemu2 VM for "ha-688000" ...
	I0624 03:25:20.502660    6045 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/ha-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/ha-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/ha-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:dc:13:5f:9b:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/ha-688000/disk.qcow2
	I0624 03:25:20.504748    6045 main.go:141] libmachine: STDOUT: 
	I0624 03:25:20.504769    6045 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:25:20.504810    6045 fix.go:56] duration metric: took 14.253667ms for fixHost
	I0624 03:25:20.504814    6045 start.go:83] releasing machines lock for "ha-688000", held for 14.280667ms
	W0624 03:25:20.504821    6045 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:25:20.504853    6045 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:25:20.504858    6045 start.go:728] Will try again in 5 seconds ...
	I0624 03:25:25.507008    6045 start.go:360] acquireMachinesLock for ha-688000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:25:25.507360    6045 start.go:364] duration metric: took 266.291µs to acquireMachinesLock for "ha-688000"
	I0624 03:25:25.507484    6045 start.go:96] Skipping create...Using existing machine configuration
	I0624 03:25:25.507501    6045 fix.go:54] fixHost starting: 
	I0624 03:25:25.508229    6045 fix.go:112] recreateIfNeeded on ha-688000: state=Stopped err=<nil>
	W0624 03:25:25.508257    6045 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 03:25:25.512716    6045 out.go:177] * Restarting existing qemu2 VM for "ha-688000" ...
	I0624 03:25:25.516885    6045 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/ha-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/ha-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/ha-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:dc:13:5f:9b:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/ha-688000/disk.qcow2
	I0624 03:25:25.525755    6045 main.go:141] libmachine: STDOUT: 
	I0624 03:25:25.525827    6045 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:25:25.525914    6045 fix.go:56] duration metric: took 18.411375ms for fixHost
	I0624 03:25:25.525938    6045 start.go:83] releasing machines lock for "ha-688000", held for 18.552542ms
	W0624 03:25:25.526170    6045 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-688000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-688000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:25:25.533680    6045 out.go:177] 
	W0624 03:25:25.537660    6045 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:25:25.537681    6045 out.go:239] * 
	* 
	W0624 03:25:25.540288    6045 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:25:25.548651    6045 out.go:177] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-688000 -v=7 --alsologtostderr" : exit status 80
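Both restart attempts die at the same point: qemu is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon behind /var/run/socket_vmnet ("Connection refused"). The daemon can be probed directly with the paths already shown in the log (a sketch; the launchd service lookup is an assumption, since installations vary):

	ls -l /var/run/socket_vmnet                                           # is the socket present at all?
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true  # reproduces "Connection refused" if the daemon is down
	sudo launchctl list | grep -i socket_vmnet                            # assumes a launchd-managed install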
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-688000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (33.158917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.86s)
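Every qemu2 failure in this run reduces to the same root cause: nothing is answering on /var/run/socket_vmnet, so the driver cannot attach the VM's network. A minimal triage sketch for the CI host follows; it assumes a Homebrew-managed socket_vmnet daemon (the service name and the lsof usage are assumptions, not part of the test run):

	# Does the socket exist, and is a daemon attached to it?
	ls -l /var/run/socket_vmnet
	sudo lsof -U 2>/dev/null | grep socket_vmnet
	# Restart the daemon; assumes socket_vmnet was installed as a Homebrew service.
	sudo brew services restart socket_vmnet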

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 node delete m03 -v=7 --alsologtostderr: exit status 83 (38.197333ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-688000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-688000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0624 03:25:25.693246    6057 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:25:25.693855    6057 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:25:25.693858    6057 out.go:304] Setting ErrFile to fd 2...
	I0624 03:25:25.693861    6057 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:25:25.694039    6057 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:25:25.694245    6057 mustload.go:65] Loading cluster: ha-688000
	I0624 03:25:25.694454    6057 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:25:25.698769    6057 out.go:177] * The control-plane node ha-688000 host is not running: state=Stopped
	I0624 03:25:25.699887    6057 out.go:177]   To start a cluster, run: "minikube start -p ha-688000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-688000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr: exit status 7 (30.482708ms)

                                                
                                                
-- stdout --
	ha-688000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0624 03:25:25.731434    6059 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:25:25.731597    6059 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:25:25.731601    6059 out.go:304] Setting ErrFile to fd 2...
	I0624 03:25:25.731603    6059 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:25:25.731742    6059 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:25:25.731857    6059 out.go:298] Setting JSON to false
	I0624 03:25:25.731866    6059 mustload.go:65] Loading cluster: ha-688000
	I0624 03:25:25.731937    6059 notify.go:220] Checking for updates...
	I0624 03:25:25.732539    6059 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:25:25.732650    6059 status.go:255] checking status of ha-688000 ...
	I0624 03:25:25.733097    6059 status.go:330] ha-688000 host status = "Stopped" (err=<nil>)
	I0624 03:25:25.733105    6059 status.go:343] host is not running, skipping remaining checks
	I0624 03:25:25.733108    6059 status.go:257] ha-688000 status: &{Name:ha-688000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (30.649416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-688000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-688000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-688000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-688000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (29.594583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.10s)
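The assertion above buries the fields being checked ("Status" and the node list) inside one very long JSON document. When triaging these profile-status failures, a jq one-liner (an illustrative convenience, not part of the suite) surfaces exactly what the test compares:

	out/minikube-darwin-arm64 profile list --output json \
	  | jq -r '.valid[] | [.Name, .Status, (.Config.Nodes | length)] | @tsv'

For this run it would print "ha-688000  Stopped  1", which is why the expected "Degraded" status can never be observed.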

                                                
                                    
TestMultiControlPlane/serial/StopCluster (3.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-688000 stop -v=7 --alsologtostderr: (3.138383958s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr: exit status 7 (69.846625ms)

                                                
                                                
-- stdout --
	ha-688000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0624 03:25:29.071515    6087 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:25:29.071711    6087 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:25:29.071715    6087 out.go:304] Setting ErrFile to fd 2...
	I0624 03:25:29.071718    6087 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:25:29.071900    6087 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:25:29.072047    6087 out.go:298] Setting JSON to false
	I0624 03:25:29.072059    6087 mustload.go:65] Loading cluster: ha-688000
	I0624 03:25:29.072095    6087 notify.go:220] Checking for updates...
	I0624 03:25:29.072309    6087 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:25:29.072316    6087 status.go:255] checking status of ha-688000 ...
	I0624 03:25:29.072601    6087 status.go:330] ha-688000 host status = "Stopped" (err=<nil>)
	I0624 03:25:29.072605    6087 status.go:343] host is not running, skipping remaining checks
	I0624 03:25:29.072608    6087 status.go:257] ha-688000 status: &{Name:ha-688000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr": ha-688000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr": ha-688000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr": ha-688000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (32.987709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.24s)
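ha_test.go derives the three complaints above by counting substrings in the status text; with the secondary nodes never created, each count is 1 instead of the expected 2 or 3. A rough shell equivalent of that counting (the exact substrings used by ha_test.go are presumed, not verified here):

	out/minikube-darwin-arm64 -p ha-688000 status | grep -c 'type: Control Plane'
	out/minikube-darwin-arm64 -p ha-688000 status | grep -c 'kubelet: Stopped'
	out/minikube-darwin-arm64 -p ha-688000 status | grep -c 'apiserver: Stopped'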

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-688000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-688000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.1862925s)

                                                
                                                
-- stdout --
	* [ha-688000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-688000" primary control-plane node in "ha-688000" cluster
	* Restarting existing qemu2 VM for "ha-688000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-688000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0624 03:25:29.135226    6091 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:25:29.135376    6091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:25:29.135380    6091 out.go:304] Setting ErrFile to fd 2...
	I0624 03:25:29.135382    6091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:25:29.135484    6091 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:25:29.136487    6091 out.go:298] Setting JSON to false
	I0624 03:25:29.152239    6091 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5099,"bootTime":1719219630,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:25:29.152310    6091 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:25:29.156480    6091 out.go:177] * [ha-688000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:25:29.164506    6091 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:25:29.164557    6091 notify.go:220] Checking for updates...
	I0624 03:25:29.172451    6091 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:25:29.175413    6091 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:25:29.179331    6091 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:25:29.182437    6091 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:25:29.185434    6091 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:25:29.188682    6091 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:25:29.188965    6091 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:25:29.192501    6091 out.go:177] * Using the qemu2 driver based on existing profile
	I0624 03:25:29.199383    6091 start.go:297] selected driver: qemu2
	I0624 03:25:29.199389    6091 start.go:901] validating driver "qemu2" against &{Name:ha-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.2 ClusterName:ha-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:25:29.199430    6091 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:25:29.201664    6091 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 03:25:29.201705    6091 cni.go:84] Creating CNI manager for ""
	I0624 03:25:29.201709    6091 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0624 03:25:29.201761    6091 start.go:340] cluster config:
	{Name:ha-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-688000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:25:29.206105    6091 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:25:29.213334    6091 out.go:177] * Starting "ha-688000" primary control-plane node in "ha-688000" cluster
	I0624 03:25:29.217397    6091 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:25:29.217417    6091 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0624 03:25:29.217427    6091 cache.go:56] Caching tarball of preloaded images
	I0624 03:25:29.217484    6091 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0624 03:25:29.217490    6091 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:25:29.217548    6091 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/ha-688000/config.json ...
	I0624 03:25:29.217994    6091 start.go:360] acquireMachinesLock for ha-688000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:25:29.218023    6091 start.go:364] duration metric: took 23.417µs to acquireMachinesLock for "ha-688000"
	I0624 03:25:29.218032    6091 start.go:96] Skipping create...Using existing machine configuration
	I0624 03:25:29.218039    6091 fix.go:54] fixHost starting: 
	I0624 03:25:29.218159    6091 fix.go:112] recreateIfNeeded on ha-688000: state=Stopped err=<nil>
	W0624 03:25:29.218167    6091 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 03:25:29.226386    6091 out.go:177] * Restarting existing qemu2 VM for "ha-688000" ...
	I0624 03:25:29.230475    6091 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/ha-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/ha-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/ha-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:dc:13:5f:9b:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/ha-688000/disk.qcow2
	I0624 03:25:29.232530    6091 main.go:141] libmachine: STDOUT: 
	I0624 03:25:29.232549    6091 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:25:29.232578    6091 fix.go:56] duration metric: took 14.539959ms for fixHost
	I0624 03:25:29.232582    6091 start.go:83] releasing machines lock for "ha-688000", held for 14.5545ms
	W0624 03:25:29.232589    6091 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:25:29.232629    6091 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:25:29.232634    6091 start.go:728] Will try again in 5 seconds ...
	I0624 03:25:34.234826    6091 start.go:360] acquireMachinesLock for ha-688000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:25:34.235151    6091 start.go:364] duration metric: took 260.291µs to acquireMachinesLock for "ha-688000"
	I0624 03:25:34.235260    6091 start.go:96] Skipping create...Using existing machine configuration
	I0624 03:25:34.235278    6091 fix.go:54] fixHost starting: 
	I0624 03:25:34.235934    6091 fix.go:112] recreateIfNeeded on ha-688000: state=Stopped err=<nil>
	W0624 03:25:34.235961    6091 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 03:25:34.244289    6091 out.go:177] * Restarting existing qemu2 VM for "ha-688000" ...
	I0624 03:25:34.248536    6091 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/ha-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/ha-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/ha-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:dc:13:5f:9b:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/ha-688000/disk.qcow2
	I0624 03:25:34.257211    6091 main.go:141] libmachine: STDOUT: 
	I0624 03:25:34.257276    6091 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:25:34.257341    6091 fix.go:56] duration metric: took 22.064625ms for fixHost
	I0624 03:25:34.257360    6091 start.go:83] releasing machines lock for "ha-688000", held for 22.186417ms
	W0624 03:25:34.257530    6091 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-688000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:25:34.265222    6091 out.go:177] 
	W0624 03:25:34.269343    6091 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:25:34.269390    6091 out.go:239] * 
	W0624 03:25:34.272098    6091 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:25:34.280358    6091 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-688000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (70.978166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.26s)
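The libmachine lines above show how the qemu2 driver wires the guest NIC to vmnet: socket_vmnet_client connects to /var/run/socket_vmnet and execs qemu-system-aarch64 with the connected socket inherited as file descriptor 3, which is what "-netdev socket,id=net0,fd=3" refers to. A stripped-down, hand-runnable form of that invocation (arguments abbreviated from the log; with the daemon down it fails with the same "Connection refused" before qemu ever starts):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf \
	  -m 2200 -smp 2 -display none \
	  -device virtio-net-pci,netdev=net0 -netdev socket,id=net0,fd=3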

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-688000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-688000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-688000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-688000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (29.97275ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.10s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-688000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-688000 --control-plane -v=7 --alsologtostderr: exit status 83 (41.436417ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-688000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-688000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0624 03:25:34.498189    6107 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:25:34.498336    6107 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:25:34.498339    6107 out.go:304] Setting ErrFile to fd 2...
	I0624 03:25:34.498342    6107 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:25:34.498486    6107 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:25:34.498722    6107 mustload.go:65] Loading cluster: ha-688000
	I0624 03:25:34.498908    6107 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:25:34.503158    6107 out.go:177] * The control-plane node ha-688000 host is not running: state=Stopped
	I0624 03:25:34.506062    6107 out.go:177]   To start a cluster, run: "minikube start -p ha-688000"

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-688000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (30.052167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-688000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-688000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-688000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-688000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-688000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-688000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-688000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-688000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (29.653916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.10s)

                                                
                                    
TestImageBuild/serial/Setup (9.97s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-652000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-652000 --driver=qemu2 : exit status 80 (9.900502167s)

                                                
                                                
-- stdout --
	* [image-652000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-652000" primary control-plane node in "image-652000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-652000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-652000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-652000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-652000 -n image-652000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-652000 -n image-652000: exit status 7 (67.792209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-652000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.97s)

                                                
                                    
TestJSONOutput/start/Command (9.72s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-672000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-672000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.719585167s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7ba9bb3c-3f42-46e8-8dab-c0548b170d91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-672000] minikube v1.33.1 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bbf83452-7f38-44a5-9cb3-a0df90796425","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19124"}}
	{"specversion":"1.0","id":"79f3bcf1-cad0-4e11-a2fb-84138d1e8cb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig"}}
	{"specversion":"1.0","id":"50891420-ca6e-4737-be7f-b4a920f321da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"27f85183-d761-4f39-bf0b-2eb4c29a5d89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a9cbf4ad-a4bb-4191-a632-0a54a69277a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube"}}
	{"specversion":"1.0","id":"875f0bce-679e-48d9-9a4a-cc6f79174ffa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"cae49492-9741-4afc-a344-9f829b83dd08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2a13d579-b93d-4b23-a6ae-78285faa6fb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"362018a0-5774-4dc8-a3b8-0dfcaf4640bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-672000\" primary control-plane node in \"json-output-672000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"dc6acdbd-b0eb-4e51-aea3-5ade5310389a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"dc6d5c3d-d131-441c-9855-62ddc3289420","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-672000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"e89eaa67-6f00-4e2a-b2dd-6192dbf29a1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"4480e9d4-f964-4582-80f5-a0b4324685df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"d4099a37-2769-4cfe-a3c6-86c707ff6ae7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-672000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"f61cf2d2-cc9d-4c3a-a232-03e9deff9d7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"ae09c503-69b8-4caf-8f7b-61ee48b84eca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-672000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.72s)
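The conversion failure above is mechanical rather than mysterious: the test decodes the start command's stdout line by line as CloudEvents, but socket_vmnet_client writes its raw "OUTPUT:" and "ERROR:" lines onto the same stream, and the first non-JSON byte ('O') aborts the decode. A minimal Go sketch of that failure mode (the cloudEvent struct below is an illustrative stand-in, not the test's actual type):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // cloudEvent carries only the fields visible in this report;
    // the real test uses richer CloudEvents types.
    type cloudEvent struct {
    	SpecVersion string          `json:"specversion"`
    	Type        string          `json:"type"`
    	Data        json.RawMessage `json:"data"`
    }

    func main() {
    	lines := []string{
    		`{"specversion":"1.0","type":"io.k8s.sigs.minikube.info","data":{"message":"ok"}}`,
    		`OUTPUT: `, // raw socket_vmnet_client output interleaved with the JSON events
    	}
    	for _, l := range lines {
    		var ev cloudEvent
    		if err := json.Unmarshal([]byte(l), &ev); err != nil {
    			fmt.Println(err) // invalid character 'O' looking for beginning of value
    			continue
    		}
    		fmt.Println("event:", ev.Type)
    	}
    }

The unpause failure later in this report trips the same check when plain-text lines beginning with '*' reach the decoder.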

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-672000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-672000 --output=json --user=testUser: exit status 83 (78.50775ms)

-- stdout --
	{"specversion":"1.0","id":"abefb59c-1c42-4a3f-9085-8cb9a6c9a5c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-672000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"f0054109-7283-45e8-adad-6ab8a0ff7140","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-672000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-672000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.04s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-672000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-672000 --output=json --user=testUser: exit status 83 (44.20025ms)

-- stdout --
	* The control-plane node json-output-672000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-672000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-672000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-672000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

TestMinikubeProfile (10.17s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-840000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-840000 --driver=qemu2 : exit status 80 (9.737800041s)

-- stdout --
	* [first-840000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-840000" primary control-plane node in "first-840000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-840000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-840000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-840000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-06-24 03:26:06.973199 -0700 PDT m=+441.432101585
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-842000 -n second-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-842000 -n second-842000: exit status 85 (79.308292ms)

-- stdout --
	* Profile "second-842000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-842000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-842000" host is not running, skipping log retrieval (state="* Profile \"second-842000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-842000\"")
helpers_test.go:175: Cleaning up "second-842000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-842000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-06-24 03:26:07.278784 -0700 PDT m=+441.737689043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-840000 -n first-840000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-840000 -n first-840000: exit status 7 (29.422ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-840000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-840000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-840000
--- FAIL: TestMinikubeProfile (10.17s)

TestMountStart/serial/StartWithMountFirst (10s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-475000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-475000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.925651666s)

-- stdout --
	* [mount-start-1-475000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-475000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-475000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-475000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-475000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-475000 -n mount-start-1-475000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-475000 -n mount-start-1-475000: exit status 7 (68.834167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-475000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.00s)

TestMultiNode/serial/FreshStart2Nodes (9.96s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-913000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-913000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.893092084s)

-- stdout --
	* [multinode-913000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-913000" primary control-plane node in "multinode-913000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-913000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0624 03:26:17.754872    6273 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:26:17.755005    6273 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:26:17.755009    6273 out.go:304] Setting ErrFile to fd 2...
	I0624 03:26:17.755011    6273 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:26:17.755173    6273 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:26:17.756224    6273 out.go:298] Setting JSON to false
	I0624 03:26:17.772124    6273 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5147,"bootTime":1719219630,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:26:17.772186    6273 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:26:17.779078    6273 out.go:177] * [multinode-913000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:26:17.787040    6273 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:26:17.787095    6273 notify.go:220] Checking for updates...
	I0624 03:26:17.794022    6273 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:26:17.796922    6273 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:26:17.800980    6273 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:26:17.804017    6273 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:26:17.807029    6273 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:26:17.810091    6273 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:26:17.813103    6273 out.go:177] * Using the qemu2 driver based on user configuration
	I0624 03:26:17.819985    6273 start.go:297] selected driver: qemu2
	I0624 03:26:17.819990    6273 start.go:901] validating driver "qemu2" against <nil>
	I0624 03:26:17.819996    6273 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:26:17.822255    6273 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0624 03:26:17.826002    6273 out.go:177] * Automatically selected the socket_vmnet network
	I0624 03:26:17.829085    6273 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 03:26:17.829135    6273 cni.go:84] Creating CNI manager for ""
	I0624 03:26:17.829140    6273 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0624 03:26:17.829144    6273 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0624 03:26:17.829203    6273 start.go:340] cluster config:
	{Name:multinode-913000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-913000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:26:17.833679    6273 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:26:17.854049    6273 out.go:177] * Starting "multinode-913000" primary control-plane node in "multinode-913000" cluster
	I0624 03:26:17.857975    6273 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:26:17.858002    6273 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0624 03:26:17.858008    6273 cache.go:56] Caching tarball of preloaded images
	I0624 03:26:17.858068    6273 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0624 03:26:17.858073    6273 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:26:17.858308    6273 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/multinode-913000/config.json ...
	I0624 03:26:17.858321    6273 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/multinode-913000/config.json: {Name:mk9d1a9f8a77e647451f8be133f7fbc19cc34405 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:26:17.858560    6273 start.go:360] acquireMachinesLock for multinode-913000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:26:17.858599    6273 start.go:364] duration metric: took 32.333µs to acquireMachinesLock for "multinode-913000"
	I0624 03:26:17.858611    6273 start.go:93] Provisioning new machine with config: &{Name:multinode-913000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-913000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:26:17.858645    6273 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:26:17.861947    6273 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0624 03:26:17.880068    6273 start.go:159] libmachine.API.Create for "multinode-913000" (driver="qemu2")
	I0624 03:26:17.880090    6273 client.go:168] LocalClient.Create starting
	I0624 03:26:17.880153    6273 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:26:17.880183    6273 main.go:141] libmachine: Decoding PEM data...
	I0624 03:26:17.880198    6273 main.go:141] libmachine: Parsing certificate...
	I0624 03:26:17.880235    6273 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:26:17.880259    6273 main.go:141] libmachine: Decoding PEM data...
	I0624 03:26:17.880267    6273 main.go:141] libmachine: Parsing certificate...
	I0624 03:26:17.880624    6273 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:26:18.025672    6273 main.go:141] libmachine: Creating SSH key...
	I0624 03:26:18.133617    6273 main.go:141] libmachine: Creating Disk image...
	I0624 03:26:18.133623    6273 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:26:18.133821    6273 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/multinode-913000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/multinode-913000/disk.qcow2
	I0624 03:26:18.146404    6273 main.go:141] libmachine: STDOUT: 
	I0624 03:26:18.146424    6273 main.go:141] libmachine: STDERR: 
	I0624 03:26:18.146485    6273 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/multinode-913000/disk.qcow2 +20000M
	I0624 03:26:18.157402    6273 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:26:18.157419    6273 main.go:141] libmachine: STDERR: 
	I0624 03:26:18.157434    6273 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/multinode-913000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/multinode-913000/disk.qcow2
	I0624 03:26:18.157439    6273 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:26:18.157471    6273 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/multinode-913000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/multinode-913000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/multinode-913000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:17:7d:27:2e:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/multinode-913000/disk.qcow2
	I0624 03:26:18.159154    6273 main.go:141] libmachine: STDOUT: 
	I0624 03:26:18.159167    6273 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:26:18.159187    6273 client.go:171] duration metric: took 279.093125ms to LocalClient.Create
	I0624 03:26:20.161500    6273 start.go:128] duration metric: took 2.302829417s to createHost
	I0624 03:26:20.161601    6273 start.go:83] releasing machines lock for "multinode-913000", held for 2.303007208s
	W0624 03:26:20.161647    6273 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:26:20.177975    6273 out.go:177] * Deleting "multinode-913000" in qemu2 ...
	W0624 03:26:20.206488    6273 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:26:20.206519    6273 start.go:728] Will try again in 5 seconds ...
	I0624 03:26:25.208154    6273 start.go:360] acquireMachinesLock for multinode-913000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:26:25.208650    6273 start.go:364] duration metric: took 380.416µs to acquireMachinesLock for "multinode-913000"
	I0624 03:26:25.208791    6273 start.go:93] Provisioning new machine with config: &{Name:multinode-913000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-913000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:26:25.209082    6273 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:26:25.224752    6273 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0624 03:26:25.274445    6273 start.go:159] libmachine.API.Create for "multinode-913000" (driver="qemu2")
	I0624 03:26:25.274489    6273 client.go:168] LocalClient.Create starting
	I0624 03:26:25.274581    6273 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:26:25.274636    6273 main.go:141] libmachine: Decoding PEM data...
	I0624 03:26:25.274654    6273 main.go:141] libmachine: Parsing certificate...
	I0624 03:26:25.274727    6273 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:26:25.274778    6273 main.go:141] libmachine: Decoding PEM data...
	I0624 03:26:25.274796    6273 main.go:141] libmachine: Parsing certificate...
	I0624 03:26:25.275305    6273 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:26:25.427460    6273 main.go:141] libmachine: Creating SSH key...
	I0624 03:26:25.550851    6273 main.go:141] libmachine: Creating Disk image...
	I0624 03:26:25.550856    6273 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:26:25.551079    6273 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/multinode-913000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/multinode-913000/disk.qcow2
	I0624 03:26:25.563898    6273 main.go:141] libmachine: STDOUT: 
	I0624 03:26:25.563919    6273 main.go:141] libmachine: STDERR: 
	I0624 03:26:25.563981    6273 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/multinode-913000/disk.qcow2 +20000M
	I0624 03:26:25.574839    6273 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:26:25.574860    6273 main.go:141] libmachine: STDERR: 
	I0624 03:26:25.574885    6273 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/multinode-913000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/multinode-913000/disk.qcow2
	I0624 03:26:25.574889    6273 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:26:25.574931    6273 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/multinode-913000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/multinode-913000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/multinode-913000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:31:d7:ec:45:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/multinode-913000/disk.qcow2
	I0624 03:26:25.576604    6273 main.go:141] libmachine: STDOUT: 
	I0624 03:26:25.576619    6273 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:26:25.576632    6273 client.go:171] duration metric: took 302.286208ms to LocalClient.Create
	I0624 03:26:27.577872    6273 start.go:128] duration metric: took 2.369872792s to createHost
	I0624 03:26:27.577921    6273 start.go:83] releasing machines lock for "multinode-913000", held for 2.370361542s
	W0624 03:26:27.578310    6273 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-913000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-913000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:26:27.592081    6273 out.go:177] 
	W0624 03:26:27.596149    6273 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:26:27.596175    6273 out.go:239] * 
	* 
	W0624 03:26:27.598720    6273 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:26:27.604988    6273 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-913000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-913000 -n multinode-913000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-913000 -n multinode-913000: exit status 7 (69.079416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-913000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.96s)
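Every qemu2 start in this run dies at the same step: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot connect to the unix socket at /var/run/socket_vmnet, meaning nothing on the host is accepting connections there. A quick out-of-band probe, sketched here as a diagnostic using only the socket path visible in the command line above (it is not part of the test suite):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Path taken from the failing socket_vmnet_client invocation in the log.
    	const sock = "/var/run/socket_vmnet"
    	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
    	if err != nil {
    		// Expected on this host: "connect: connection refused" when the socket
    		// file exists but no socket_vmnet daemon is accepting, or
    		// "no such file or directory" when the socket was never created.
    		fmt.Println("socket_vmnet not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }

If the probe reports connection refused, the socket_vmnet daemon on the agent needs to be restarted before rerunning the suite; the "minikube delete" advice in the log cannot help, because the failure happens before any VM exists.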

TestMultiNode/serial/DeployApp2Nodes (115.94s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-913000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-913000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (59.212208ms)

** stderr ** 
	error: cluster "multinode-913000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-913000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-913000 -- rollout status deployment/busybox: exit status 1 (56.708917ms)

** stderr ** 
	error: no server found for cluster "multinode-913000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-913000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-913000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.624583ms)

** stderr ** 
	error: no server found for cluster "multinode-913000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-913000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-913000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.966083ms)

** stderr ** 
	error: no server found for cluster "multinode-913000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-913000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-913000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.091541ms)

** stderr ** 
	error: no server found for cluster "multinode-913000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-913000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-913000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.140459ms)

** stderr ** 
	error: no server found for cluster "multinode-913000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-913000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-913000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.4825ms)

** stderr ** 
	error: no server found for cluster "multinode-913000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-913000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-913000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.591042ms)

** stderr ** 
	error: no server found for cluster "multinode-913000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-913000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-913000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.450584ms)

** stderr ** 
	error: no server found for cluster "multinode-913000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-913000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-913000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.355542ms)

** stderr ** 
	error: no server found for cluster "multinode-913000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-913000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-913000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.307708ms)

** stderr ** 
	error: no server found for cluster "multinode-913000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-913000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-913000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.616458ms)

** stderr ** 
	error: no server found for cluster "multinode-913000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-913000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-913000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.740125ms)

** stderr ** 
	error: no server found for cluster "multinode-913000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-913000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-913000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.620125ms)

** stderr ** 
	error: no server found for cluster "multinode-913000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-913000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-913000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.44275ms)

** stderr ** 
	error: no server found for cluster "multinode-913000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-913000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-913000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.54475ms)

** stderr ** 
	error: no server found for cluster "multinode-913000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-913000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-913000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.393083ms)

** stderr ** 
	error: no server found for cluster "multinode-913000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-913000 -n multinode-913000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-913000 -n multinode-913000: exit status 7 (30.028416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-913000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (115.94s)

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-913000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-913000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.137125ms)

** stderr ** 
	error: no server found for cluster "multinode-913000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-913000 -n multinode-913000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-913000 -n multinode-913000: exit status 7 (30.755708ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-913000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-913000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-913000 -v 3 --alsologtostderr: exit status 83 (42.622416ms)

-- stdout --
	* The control-plane node multinode-913000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-913000"

-- /stdout --
** stderr ** 
	I0624 03:28:23.738847    6371 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:28:23.738992    6371 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:28:23.738995    6371 out.go:304] Setting ErrFile to fd 2...
	I0624 03:28:23.738998    6371 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:28:23.739125    6371 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:28:23.739353    6371 mustload.go:65] Loading cluster: multinode-913000
	I0624 03:28:23.739529    6371 config.go:182] Loaded profile config "multinode-913000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:28:23.744614    6371 out.go:177] * The control-plane node multinode-913000 host is not running: state=Stopped
	I0624 03:28:23.748489    6371 out.go:177]   To start a cluster, run: "minikube start -p multinode-913000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-913000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-913000 -n multinode-913000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-913000 -n multinode-913000: exit status 7 (29.806459ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-913000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-913000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-913000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.703417ms)
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-913000
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-913000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-913000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-913000 -n multinode-913000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-913000 -n multinode-913000: exit status 7 (29.89675ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-913000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
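
Note: the "unexpected end of JSON input" above is a downstream symptom, not a separate bug. kubectl exits non-zero because the multinode-913000 context was never created, so the test is left decoding an empty string. A minimal Go sketch (illustrative only, not the test's actual code; the decode target here is a stand-in for the label list):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// kubectl failed, so there is no output at all; decoding the empty
	// string fails before any labels can be compared.
	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // prints: unexpected end of JSON input
}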

TestMultiNode/serial/ProfileList (0.1s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-913000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-913000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-913000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"multinode-913000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-913000 -n multinode-913000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-913000 -n multinode-913000: exit status 7 (29.391541ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-913000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)
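
Note: here the profile JSON decodes cleanly; the assertion fails because Config.Nodes holds a single control-plane node instead of the three nodes (one control plane plus two workers) the test expects. A trimmed Go sketch of that check, using a hypothetical struct that covers only the fields involved (the real config has many more, as the blob above shows):

package main

import (
	"encoding/json"
	"fmt"
)

type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []struct {
				ControlPlane bool `json:"ControlPlane"`
				Worker       bool `json:"Worker"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	// Abbreviated form of the 'profile list --output json' output captured above.
	out := []byte(`{"invalid":[],"valid":[{"Name":"multinode-913000","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	fmt.Println("nodes:", len(pl.Valid[0].Config.Nodes)) // 1, not the expected 3
}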

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-913000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-913000 status --output json --alsologtostderr: exit status 7 (30.216958ms)
-- stdout --
	{"Name":"multinode-913000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}
-- /stdout --
** stderr ** 
	I0624 03:28:23.967698    6384 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:28:23.968066    6384 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:28:23.968070    6384 out.go:304] Setting ErrFile to fd 2...
	I0624 03:28:23.968073    6384 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:28:23.968273    6384 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:28:23.968430    6384 out.go:298] Setting JSON to true
	I0624 03:28:23.968441    6384 mustload.go:65] Loading cluster: multinode-913000
	I0624 03:28:23.968470    6384 notify.go:220] Checking for updates...
	I0624 03:28:23.968849    6384 config.go:182] Loaded profile config "multinode-913000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:28:23.968856    6384 status.go:255] checking status of multinode-913000 ...
	I0624 03:28:23.969050    6384 status.go:330] multinode-913000 host status = "Stopped" (err=<nil>)
	I0624 03:28:23.969054    6384 status.go:343] host is not running, skipping remaining checks
	I0624 03:28:23.969056    6384 status.go:257] multinode-913000 status: &{Name:multinode-913000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-913000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-913000 -n multinode-913000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-913000 -n multinode-913000: exit status 7 (30.057542ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-913000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
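
Note: "cannot unmarshal object into Go value of type []cmd.Status" is a shape mismatch: with only one node in the profile, `minikube status --output json` emits a single JSON object, while the multinode test decodes into a slice. A self-contained Go sketch of the mismatch (Status is a stand-in for minikube's cmd.Status, reduced to the fields visible above):

package main

import (
	"encoding/json"
	"fmt"
)

type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	// The single object printed in the stdout block above.
	out := []byte(`{"Name":"multinode-913000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	var sts []Status
	if err := json.Unmarshal(out, &sts); err != nil {
		// json: cannot unmarshal object into Go value of type []main.Status
		fmt.Println(err)
	}
}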

TestMultiNode/serial/StopNode (0.13s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-913000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-913000 node stop m03: exit status 85 (43.750167ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-913000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-913000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-913000 status: exit status 7 (29.565958ms)
-- stdout --
	multinode-913000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-913000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-913000 status --alsologtostderr: exit status 7 (29.321625ms)
-- stdout --
	multinode-913000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0624 03:28:24.101769    6392 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:28:24.101921    6392 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:28:24.101924    6392 out.go:304] Setting ErrFile to fd 2...
	I0624 03:28:24.101930    6392 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:28:24.102047    6392 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:28:24.102154    6392 out.go:298] Setting JSON to false
	I0624 03:28:24.102164    6392 mustload.go:65] Loading cluster: multinode-913000
	I0624 03:28:24.102228    6392 notify.go:220] Checking for updates...
	I0624 03:28:24.102375    6392 config.go:182] Loaded profile config "multinode-913000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:28:24.102380    6392 status.go:255] checking status of multinode-913000 ...
	I0624 03:28:24.102602    6392 status.go:330] multinode-913000 host status = "Stopped" (err=<nil>)
	I0624 03:28:24.102606    6392 status.go:343] host is not running, skipping remaining checks
	I0624 03:28:24.102608    6392 status.go:257] multinode-913000 status: &{Name:multinode-913000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-913000 status --alsologtostderr": multinode-913000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-913000 -n multinode-913000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-913000 -n multinode-913000: exit status 7 (29.928334ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-913000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)

TestMultiNode/serial/StartAfterStop (54.76s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-913000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-913000 node start m03 -v=7 --alsologtostderr: exit status 85 (48.162167ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I0624 03:28:24.162600    6396 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:28:24.162996    6396 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:28:24.163000    6396 out.go:304] Setting ErrFile to fd 2...
	I0624 03:28:24.163002    6396 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:28:24.163170    6396 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:28:24.163400    6396 mustload.go:65] Loading cluster: multinode-913000
	I0624 03:28:24.163574    6396 config.go:182] Loaded profile config "multinode-913000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:28:24.167114    6396 out.go:177] 
	W0624 03:28:24.171003    6396 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0624 03:28:24.171008    6396 out.go:239] * 
	* 
	W0624 03:28:24.172915    6396 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:28:24.176952    6396 out.go:177] 
** /stderr **
multinode_test.go:284: I0624 03:28:24.162600    6396 out.go:291] Setting OutFile to fd 1 ...
I0624 03:28:24.162996    6396 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0624 03:28:24.163000    6396 out.go:304] Setting ErrFile to fd 2...
I0624 03:28:24.163002    6396 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0624 03:28:24.163170    6396 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
I0624 03:28:24.163400    6396 mustload.go:65] Loading cluster: multinode-913000
I0624 03:28:24.163574    6396 config.go:182] Loaded profile config "multinode-913000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0624 03:28:24.167114    6396 out.go:177] 
W0624 03:28:24.171003    6396 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0624 03:28:24.171008    6396 out.go:239] * 
* 
W0624 03:28:24.172915    6396 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0624 03:28:24.176952    6396 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-913000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-913000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-913000 status -v=7 --alsologtostderr: exit status 7 (29.998916ms)
-- stdout --
	multinode-913000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0624 03:28:24.210331    6398 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:28:24.210491    6398 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:28:24.210494    6398 out.go:304] Setting ErrFile to fd 2...
	I0624 03:28:24.210496    6398 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:28:24.210641    6398 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:28:24.210755    6398 out.go:298] Setting JSON to false
	I0624 03:28:24.210770    6398 mustload.go:65] Loading cluster: multinode-913000
	I0624 03:28:24.210811    6398 notify.go:220] Checking for updates...
	I0624 03:28:24.210967    6398 config.go:182] Loaded profile config "multinode-913000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:28:24.210972    6398 status.go:255] checking status of multinode-913000 ...
	I0624 03:28:24.211170    6398 status.go:330] multinode-913000 host status = "Stopped" (err=<nil>)
	I0624 03:28:24.211174    6398 status.go:343] host is not running, skipping remaining checks
	I0624 03:28:24.211176    6398 status.go:257] multinode-913000 status: &{Name:multinode-913000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-913000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-913000 status -v=7 --alsologtostderr: exit status 7 (75.007125ms)
-- stdout --
	multinode-913000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0624 03:28:25.168375    6402 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:28:25.168570    6402 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:28:25.168574    6402 out.go:304] Setting ErrFile to fd 2...
	I0624 03:28:25.168578    6402 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:28:25.168754    6402 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:28:25.168905    6402 out.go:298] Setting JSON to false
	I0624 03:28:25.168917    6402 mustload.go:65] Loading cluster: multinode-913000
	I0624 03:28:25.168957    6402 notify.go:220] Checking for updates...
	I0624 03:28:25.169160    6402 config.go:182] Loaded profile config "multinode-913000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:28:25.169167    6402 status.go:255] checking status of multinode-913000 ...
	I0624 03:28:25.169439    6402 status.go:330] multinode-913000 host status = "Stopped" (err=<nil>)
	I0624 03:28:25.169444    6402 status.go:343] host is not running, skipping remaining checks
	I0624 03:28:25.169447    6402 status.go:257] multinode-913000 status: &{Name:multinode-913000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-913000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-913000 status -v=7 --alsologtostderr: exit status 7 (75.826ms)
-- stdout --
	multinode-913000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0624 03:28:26.720930    6404 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:28:26.721135    6404 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:28:26.721139    6404 out.go:304] Setting ErrFile to fd 2...
	I0624 03:28:26.721142    6404 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:28:26.721307    6404 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:28:26.721856    6404 out.go:298] Setting JSON to false
	I0624 03:28:26.721881    6404 mustload.go:65] Loading cluster: multinode-913000
	I0624 03:28:26.722288    6404 notify.go:220] Checking for updates...
	I0624 03:28:26.722420    6404 config.go:182] Loaded profile config "multinode-913000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:28:26.722454    6404 status.go:255] checking status of multinode-913000 ...
	I0624 03:28:26.722848    6404 status.go:330] multinode-913000 host status = "Stopped" (err=<nil>)
	I0624 03:28:26.722856    6404 status.go:343] host is not running, skipping remaining checks
	I0624 03:28:26.722859    6404 status.go:257] multinode-913000 status: &{Name:multinode-913000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-913000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-913000 status -v=7 --alsologtostderr: exit status 7 (76.940458ms)
-- stdout --
	multinode-913000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0624 03:28:28.048100    6406 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:28:28.048286    6406 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:28:28.048291    6406 out.go:304] Setting ErrFile to fd 2...
	I0624 03:28:28.048294    6406 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:28:28.048473    6406 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:28:28.048629    6406 out.go:298] Setting JSON to false
	I0624 03:28:28.048642    6406 mustload.go:65] Loading cluster: multinode-913000
	I0624 03:28:28.048690    6406 notify.go:220] Checking for updates...
	I0624 03:28:28.048907    6406 config.go:182] Loaded profile config "multinode-913000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:28:28.048914    6406 status.go:255] checking status of multinode-913000 ...
	I0624 03:28:28.049226    6406 status.go:330] multinode-913000 host status = "Stopped" (err=<nil>)
	I0624 03:28:28.049231    6406 status.go:343] host is not running, skipping remaining checks
	I0624 03:28:28.049234    6406 status.go:257] multinode-913000 status: &{Name:multinode-913000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-913000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-913000 status -v=7 --alsologtostderr: exit status 7 (75.167667ms)
-- stdout --
	multinode-913000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0624 03:28:31.250408    6408 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:28:31.250617    6408 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:28:31.250621    6408 out.go:304] Setting ErrFile to fd 2...
	I0624 03:28:31.250624    6408 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:28:31.250803    6408 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:28:31.250972    6408 out.go:298] Setting JSON to false
	I0624 03:28:31.250988    6408 mustload.go:65] Loading cluster: multinode-913000
	I0624 03:28:31.251023    6408 notify.go:220] Checking for updates...
	I0624 03:28:31.251240    6408 config.go:182] Loaded profile config "multinode-913000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:28:31.251247    6408 status.go:255] checking status of multinode-913000 ...
	I0624 03:28:31.251526    6408 status.go:330] multinode-913000 host status = "Stopped" (err=<nil>)
	I0624 03:28:31.251531    6408 status.go:343] host is not running, skipping remaining checks
	I0624 03:28:31.251534    6408 status.go:257] multinode-913000 status: &{Name:multinode-913000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-913000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-913000 status -v=7 --alsologtostderr: exit status 7 (74.233041ms)
-- stdout --
	multinode-913000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0624 03:28:34.696318    6418 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:28:34.696541    6418 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:28:34.696545    6418 out.go:304] Setting ErrFile to fd 2...
	I0624 03:28:34.696549    6418 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:28:34.696723    6418 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:28:34.696891    6418 out.go:298] Setting JSON to false
	I0624 03:28:34.696904    6418 mustload.go:65] Loading cluster: multinode-913000
	I0624 03:28:34.696949    6418 notify.go:220] Checking for updates...
	I0624 03:28:34.697155    6418 config.go:182] Loaded profile config "multinode-913000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:28:34.697167    6418 status.go:255] checking status of multinode-913000 ...
	I0624 03:28:34.697458    6418 status.go:330] multinode-913000 host status = "Stopped" (err=<nil>)
	I0624 03:28:34.697462    6418 status.go:343] host is not running, skipping remaining checks
	I0624 03:28:34.697465    6418 status.go:257] multinode-913000 status: &{Name:multinode-913000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-913000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-913000 status -v=7 --alsologtostderr: exit status 7 (76.469625ms)
-- stdout --
	multinode-913000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0624 03:28:38.647331    6420 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:28:38.647539    6420 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:28:38.647544    6420 out.go:304] Setting ErrFile to fd 2...
	I0624 03:28:38.647547    6420 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:28:38.647723    6420 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:28:38.647875    6420 out.go:298] Setting JSON to false
	I0624 03:28:38.647888    6420 mustload.go:65] Loading cluster: multinode-913000
	I0624 03:28:38.647934    6420 notify.go:220] Checking for updates...
	I0624 03:28:38.648132    6420 config.go:182] Loaded profile config "multinode-913000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:28:38.648138    6420 status.go:255] checking status of multinode-913000 ...
	I0624 03:28:38.648434    6420 status.go:330] multinode-913000 host status = "Stopped" (err=<nil>)
	I0624 03:28:38.648438    6420 status.go:343] host is not running, skipping remaining checks
	I0624 03:28:38.648441    6420 status.go:257] multinode-913000 status: &{Name:multinode-913000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-913000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-913000 status -v=7 --alsologtostderr: exit status 7 (72.6705ms)
-- stdout --
	multinode-913000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0624 03:28:54.290263    6430 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:28:54.290479    6430 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:28:54.290484    6430 out.go:304] Setting ErrFile to fd 2...
	I0624 03:28:54.290487    6430 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:28:54.290670    6430 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:28:54.290822    6430 out.go:298] Setting JSON to false
	I0624 03:28:54.290836    6430 mustload.go:65] Loading cluster: multinode-913000
	I0624 03:28:54.290880    6430 notify.go:220] Checking for updates...
	I0624 03:28:54.291078    6430 config.go:182] Loaded profile config "multinode-913000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:28:54.291084    6430 status.go:255] checking status of multinode-913000 ...
	I0624 03:28:54.291353    6430 status.go:330] multinode-913000 host status = "Stopped" (err=<nil>)
	I0624 03:28:54.291358    6430 status.go:343] host is not running, skipping remaining checks
	I0624 03:28:54.291361    6430 status.go:257] multinode-913000 status: &{Name:multinode-913000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-913000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-913000 status -v=7 --alsologtostderr: exit status 7 (72.123416ms)
-- stdout --
	multinode-913000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0624 03:29:18.858913    6438 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:29:18.859114    6438 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:29:18.859118    6438 out.go:304] Setting ErrFile to fd 2...
	I0624 03:29:18.859121    6438 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:29:18.859297    6438 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:29:18.859448    6438 out.go:298] Setting JSON to false
	I0624 03:29:18.859462    6438 mustload.go:65] Loading cluster: multinode-913000
	I0624 03:29:18.859513    6438 notify.go:220] Checking for updates...
	I0624 03:29:18.859723    6438 config.go:182] Loaded profile config "multinode-913000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:29:18.859729    6438 status.go:255] checking status of multinode-913000 ...
	I0624 03:29:18.860000    6438 status.go:330] multinode-913000 host status = "Stopped" (err=<nil>)
	I0624 03:29:18.860005    6438 status.go:343] host is not running, skipping remaining checks
	I0624 03:29:18.860008    6438 status.go:257] multinode-913000 status: &{Name:multinode-913000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-913000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-913000 -n multinode-913000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-913000 -n multinode-913000: exit status 7 (33.517667ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-913000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (54.76s)
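
Note: most of the ~55s recorded here is spent re-running `minikube status` with growing gaps between attempts (timestamps 03:28:24 through 03:29:18 in the stderr blocks above) while the host stays Stopped. A hypothetical poll-with-backoff loop in Go that reproduces the cadence; the test's actual retry helper may differ:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	delay := time.Second
	deadline := time.Now().Add(time.Minute)
	for time.Now().Before(deadline) {
		// Same command the test re-runs at multinode_test.go:290.
		out, err := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-913000", "status").CombinedOutput()
		if err == nil {
			fmt.Printf("host is up:\n%s", out)
			return
		}
		time.Sleep(delay)
		delay *= 2 // back off before the next status check
	}
	fmt.Println("gave up: host never left the Stopped state")
}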

TestMultiNode/serial/RestartKeepsNodes (8.94s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-913000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-913000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-913000: (3.584630167s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-913000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-913000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.224711833s)
-- stdout --
	* [multinode-913000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-913000" primary control-plane node in "multinode-913000" cluster
	* Restarting existing qemu2 VM for "multinode-913000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-913000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0624 03:29:22.574308    6464 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:29:22.574481    6464 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:29:22.574485    6464 out.go:304] Setting ErrFile to fd 2...
	I0624 03:29:22.574488    6464 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:29:22.574663    6464 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:29:22.575922    6464 out.go:298] Setting JSON to false
	I0624 03:29:22.594974    6464 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5332,"bootTime":1719219630,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:29:22.595047    6464 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:29:22.599998    6464 out.go:177] * [multinode-913000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:29:22.607910    6464 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:29:22.607940    6464 notify.go:220] Checking for updates...
	I0624 03:29:22.615905    6464 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:29:22.618905    6464 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:29:22.621890    6464 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:29:22.624888    6464 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:29:22.627799    6464 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:29:22.631071    6464 config.go:182] Loaded profile config "multinode-913000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:29:22.631130    6464 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:29:22.633848    6464 out.go:177] * Using the qemu2 driver based on existing profile
	I0624 03:29:22.640910    6464 start.go:297] selected driver: qemu2
	I0624 03:29:22.640916    6464 start.go:901] validating driver "qemu2" against &{Name:multinode-913000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-913000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:29:22.640976    6464 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:29:22.643256    6464 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 03:29:22.643301    6464 cni.go:84] Creating CNI manager for ""
	I0624 03:29:22.643308    6464 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0624 03:29:22.643372    6464 start.go:340] cluster config:
	{Name:multinode-913000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-913000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:29:22.648121    6464 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:29:22.656830    6464 out.go:177] * Starting "multinode-913000" primary control-plane node in "multinode-913000" cluster
	I0624 03:29:22.660859    6464 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:29:22.660877    6464 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0624 03:29:22.660886    6464 cache.go:56] Caching tarball of preloaded images
	I0624 03:29:22.660953    6464 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0624 03:29:22.660958    6464 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:29:22.661018    6464 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/multinode-913000/config.json ...
	I0624 03:29:22.661433    6464 start.go:360] acquireMachinesLock for multinode-913000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:29:22.661472    6464 start.go:364] duration metric: took 33.5µs to acquireMachinesLock for "multinode-913000"
	I0624 03:29:22.661482    6464 start.go:96] Skipping create...Using existing machine configuration
	I0624 03:29:22.661492    6464 fix.go:54] fixHost starting: 
	I0624 03:29:22.661611    6464 fix.go:112] recreateIfNeeded on multinode-913000: state=Stopped err=<nil>
	W0624 03:29:22.661620    6464 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 03:29:22.669893    6464 out.go:177] * Restarting existing qemu2 VM for "multinode-913000" ...
	I0624 03:29:22.673752    6464 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/multinode-913000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/multinode-913000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/multinode-913000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:31:d7:ec:45:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/multinode-913000/disk.qcow2
	I0624 03:29:22.675888    6464 main.go:141] libmachine: STDOUT: 
	I0624 03:29:22.675909    6464 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:29:22.675939    6464 fix.go:56] duration metric: took 14.447792ms for fixHost
	I0624 03:29:22.675945    6464 start.go:83] releasing machines lock for "multinode-913000", held for 14.467833ms
	W0624 03:29:22.675951    6464 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:29:22.675999    6464 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:29:22.676004    6464 start.go:728] Will try again in 5 seconds ...
	I0624 03:29:27.678140    6464 start.go:360] acquireMachinesLock for multinode-913000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:29:27.678525    6464 start.go:364] duration metric: took 291.166µs to acquireMachinesLock for "multinode-913000"
	I0624 03:29:27.678639    6464 start.go:96] Skipping create...Using existing machine configuration
	I0624 03:29:27.678664    6464 fix.go:54] fixHost starting: 
	I0624 03:29:27.679378    6464 fix.go:112] recreateIfNeeded on multinode-913000: state=Stopped err=<nil>
	W0624 03:29:27.679408    6464 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 03:29:27.683869    6464 out.go:177] * Restarting existing qemu2 VM for "multinode-913000" ...
	I0624 03:29:27.689490    6464 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/multinode-913000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/multinode-913000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/multinode-913000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:31:d7:ec:45:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/multinode-913000/disk.qcow2
	I0624 03:29:27.698436    6464 main.go:141] libmachine: STDOUT: 
	I0624 03:29:27.698517    6464 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:29:27.698597    6464 fix.go:56] duration metric: took 19.937ms for fixHost
	I0624 03:29:27.698625    6464 start.go:83] releasing machines lock for "multinode-913000", held for 20.075833ms
	W0624 03:29:27.698913    6464 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-913000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-913000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:29:27.705875    6464 out.go:177] 
	W0624 03:29:27.709924    6464 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:29:27.709958    6464 out.go:239] * 
	* 
	W0624 03:29:27.712747    6464 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:29:27.720816    6464 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-913000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-913000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-913000 -n multinode-913000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-913000 -n multinode-913000: exit status 7 (32.477083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-913000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.94s)
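Every restart attempt in this block dies at the same first step: socket_vmnet_client cannot reach the socket_vmnet daemon's Unix socket, so QEMU is never launched. The following is a minimal Go sketch, independent of minikube, that dials the same socket path shown in the logs to confirm whether the daemon is accepting connections; the file name and timeout are illustrative assumptions.

	// probe_socket_vmnet.go - dial the socket_vmnet control socket directly.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the failing logs
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// A "connection refused" here matches the driver failure above:
			// the daemon is not running or not listening on this path.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe also fails with "connection refused", the failures in this report are environmental (the daemon is down on the build agent) rather than regressions in the code under test.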

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-913000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-913000 node delete m03: exit status 83 (42.583292ms)

-- stdout --
	* The control-plane node multinode-913000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-913000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-913000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-913000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-913000 status --alsologtostderr: exit status 7 (30.118792ms)

-- stdout --
	multinode-913000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0624 03:29:27.908099    6480 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:29:27.908266    6480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:29:27.908269    6480 out.go:304] Setting ErrFile to fd 2...
	I0624 03:29:27.908271    6480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:29:27.908416    6480 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:29:27.908535    6480 out.go:298] Setting JSON to false
	I0624 03:29:27.908545    6480 mustload.go:65] Loading cluster: multinode-913000
	I0624 03:29:27.908602    6480 notify.go:220] Checking for updates...
	I0624 03:29:27.908725    6480 config.go:182] Loaded profile config "multinode-913000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:29:27.908730    6480 status.go:255] checking status of multinode-913000 ...
	I0624 03:29:27.908945    6480 status.go:330] multinode-913000 host status = "Stopped" (err=<nil>)
	I0624 03:29:27.908948    6480 status.go:343] host is not running, skipping remaining checks
	I0624 03:29:27.908951    6480 status.go:257] multinode-913000 status: &{Name:multinode-913000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-913000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-913000 -n multinode-913000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-913000 -n multinode-913000: exit status 7 (29.726792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-913000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
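Two distinct exit codes appear in this block: exit status 83 accompanies the "host is not running" advice message, while exit status 7 from `minikube status` is a per-component bit field rather than a single error (hence the "may be ok" note above). A sketch of decoding it, assuming the bit layout described in `minikube status --help` (host, cluster, kubernetes from the least-significant bit up); treat the exact bit meanings as an assumption if your minikube version differs:

	// decode_status_exit.go - interpret a `minikube status` exit code as the
	// per-component bit field its help text describes.
	package main

	import "fmt"

	func main() {
		code := 7 // the status exit code seen throughout these post-mortems
		components := []struct {
			mask int
			name string
		}{
			{1, "host (minikube VM)"},
			{2, "cluster"},
			{4, "kubernetes"},
		}
		for _, c := range components {
			if code&c.mask != 0 {
				fmt.Printf("%s: not OK\n", c.name) // code 7 sets all three bits
			}
		}
	}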

TestMultiNode/serial/StopMultiNode (3.92s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-913000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-913000 stop: (3.786923125s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-913000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-913000 status: exit status 7 (69.033167ms)

-- stdout --
	multinode-913000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-913000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-913000 status --alsologtostderr: exit status 7 (33.246417ms)

-- stdout --
	multinode-913000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0624 03:29:31.827592    6506 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:29:31.827747    6506 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:29:31.827750    6506 out.go:304] Setting ErrFile to fd 2...
	I0624 03:29:31.827752    6506 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:29:31.827898    6506 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:29:31.828021    6506 out.go:298] Setting JSON to false
	I0624 03:29:31.828031    6506 mustload.go:65] Loading cluster: multinode-913000
	I0624 03:29:31.828089    6506 notify.go:220] Checking for updates...
	I0624 03:29:31.828259    6506 config.go:182] Loaded profile config "multinode-913000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:29:31.828264    6506 status.go:255] checking status of multinode-913000 ...
	I0624 03:29:31.828466    6506 status.go:330] multinode-913000 host status = "Stopped" (err=<nil>)
	I0624 03:29:31.828469    6506 status.go:343] host is not running, skipping remaining checks
	I0624 03:29:31.828472    6506 status.go:257] multinode-913000 status: &{Name:multinode-913000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-913000 status --alsologtostderr": multinode-913000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-913000 status --alsologtostderr": multinode-913000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-913000 -n multinode-913000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-913000 -n multinode-913000: exit status 7 (30.51825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-913000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.92s)
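The "incorrect number of stopped hosts/kubelets" assertions fail because the status output lists only the control-plane node: the worker nodes were never created, so the count of stopped components cannot reach the expected node count. A sketch of that style of check; the expected count of 2 and the matched substrings are illustrative assumptions, not the literal test code:

	// count_stopped.go - count per-node "Stopped" markers in status output and
	// compare against the number of nodes the test expects to exist.
	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Status output as captured above: a single node, fully stopped.
		out := "multinode-913000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
		wantNodes := 2 // control plane plus one worker in the multinode scenario
		if got := strings.Count(out, "host: Stopped"); got != wantNodes {
			fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, wantNodes)
		}
		if got := strings.Count(out, "kubelet: Stopped"); got != wantNodes {
			fmt.Printf("incorrect number of stopped kubelets: got %d, want %d\n", got, wantNodes)
		}
	}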

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-913000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-913000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.181116416s)

-- stdout --
	* [multinode-913000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-913000" primary control-plane node in "multinode-913000" cluster
	* Restarting existing qemu2 VM for "multinode-913000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-913000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0624 03:29:31.887285    6510 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:29:31.887417    6510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:29:31.887420    6510 out.go:304] Setting ErrFile to fd 2...
	I0624 03:29:31.887423    6510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:29:31.887533    6510 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:29:31.888513    6510 out.go:298] Setting JSON to false
	I0624 03:29:31.905208    6510 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5341,"bootTime":1719219630,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:29:31.905277    6510 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:29:31.909907    6510 out.go:177] * [multinode-913000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:29:31.916889    6510 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:29:31.916957    6510 notify.go:220] Checking for updates...
	I0624 03:29:31.924832    6510 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:29:31.928829    6510 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:29:31.932900    6510 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:29:31.935903    6510 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:29:31.938812    6510 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:29:31.942200    6510 config.go:182] Loaded profile config "multinode-913000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:29:31.942466    6510 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:29:31.945919    6510 out.go:177] * Using the qemu2 driver based on existing profile
	I0624 03:29:31.952897    6510 start.go:297] selected driver: qemu2
	I0624 03:29:31.952904    6510 start.go:901] validating driver "qemu2" against &{Name:multinode-913000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-913000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:29:31.952960    6510 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:29:31.955093    6510 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 03:29:31.955131    6510 cni.go:84] Creating CNI manager for ""
	I0624 03:29:31.955137    6510 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0624 03:29:31.955184    6510 start.go:340] cluster config:
	{Name:multinode-913000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-913000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:29:31.959653    6510 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:29:31.967864    6510 out.go:177] * Starting "multinode-913000" primary control-plane node in "multinode-913000" cluster
	I0624 03:29:31.971864    6510 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:29:31.971882    6510 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0624 03:29:31.971890    6510 cache.go:56] Caching tarball of preloaded images
	I0624 03:29:31.971958    6510 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0624 03:29:31.971963    6510 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:29:31.972024    6510 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/multinode-913000/config.json ...
	I0624 03:29:31.972417    6510 start.go:360] acquireMachinesLock for multinode-913000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:29:31.972445    6510 start.go:364] duration metric: took 22.083µs to acquireMachinesLock for "multinode-913000"
	I0624 03:29:31.972453    6510 start.go:96] Skipping create...Using existing machine configuration
	I0624 03:29:31.972461    6510 fix.go:54] fixHost starting: 
	I0624 03:29:31.972574    6510 fix.go:112] recreateIfNeeded on multinode-913000: state=Stopped err=<nil>
	W0624 03:29:31.972582    6510 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 03:29:31.979765    6510 out.go:177] * Restarting existing qemu2 VM for "multinode-913000" ...
	I0624 03:29:31.983855    6510 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/multinode-913000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/multinode-913000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/multinode-913000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:31:d7:ec:45:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/multinode-913000/disk.qcow2
	I0624 03:29:31.985770    6510 main.go:141] libmachine: STDOUT: 
	I0624 03:29:31.985785    6510 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:29:31.985813    6510 fix.go:56] duration metric: took 13.353584ms for fixHost
	I0624 03:29:31.985819    6510 start.go:83] releasing machines lock for "multinode-913000", held for 13.370041ms
	W0624 03:29:31.985824    6510 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:29:31.985857    6510 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:29:31.985862    6510 start.go:728] Will try again in 5 seconds ...
	I0624 03:29:36.988008    6510 start.go:360] acquireMachinesLock for multinode-913000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:29:36.988424    6510 start.go:364] duration metric: took 314.25µs to acquireMachinesLock for "multinode-913000"
	I0624 03:29:36.988541    6510 start.go:96] Skipping create...Using existing machine configuration
	I0624 03:29:36.988561    6510 fix.go:54] fixHost starting: 
	I0624 03:29:36.989441    6510 fix.go:112] recreateIfNeeded on multinode-913000: state=Stopped err=<nil>
	W0624 03:29:36.989469    6510 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 03:29:36.993947    6510 out.go:177] * Restarting existing qemu2 VM for "multinode-913000" ...
	I0624 03:29:36.997124    6510 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/multinode-913000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/multinode-913000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/multinode-913000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:31:d7:ec:45:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/multinode-913000/disk.qcow2
	I0624 03:29:37.006158    6510 main.go:141] libmachine: STDOUT: 
	I0624 03:29:37.006219    6510 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:29:37.006294    6510 fix.go:56] duration metric: took 17.732541ms for fixHost
	I0624 03:29:37.006350    6510 start.go:83] releasing machines lock for "multinode-913000", held for 17.906333ms
	W0624 03:29:37.006553    6510 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-913000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-913000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:29:37.014917    6510 out.go:177] 
	W0624 03:29:37.017989    6510 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:29:37.018039    6510 out.go:239] * 
	* 
	W0624 03:29:37.020902    6510 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:29:37.027875    6510 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-913000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-913000 -n multinode-913000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-913000 -n multinode-913000: exit status 7 (69.708584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-913000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
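The stderr above shows the restart path's retry shape: fixHost fails, the machines lock is released, minikube waits five seconds ("Will try again in 5 seconds ..."), retries once, and only then exits with GUEST_PROVISION. A compressed sketch of that control flow; the function name is illustrative, not minikube's actual API:

	// retry_start.go - the single-retry host start flow visible in the log.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// startHost stands in for the driver start that fails before QEMU boots.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		err := startHost()
		if err == nil {
			return
		}
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err := startHost(); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
			os.Exit(80) // the exit status 80 reported by each failing start
		}
	}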

TestMultiNode/serial/ValidateNameConflict (20.32s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-913000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-913000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-913000-m01 --driver=qemu2 : exit status 80 (9.910458041s)

-- stdout --
	* [multinode-913000-m01] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-913000-m01" primary control-plane node in "multinode-913000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-913000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-913000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-913000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-913000-m02 --driver=qemu2 : exit status 80 (10.153197708s)

-- stdout --
	* [multinode-913000-m02] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-913000-m02" primary control-plane node in "multinode-913000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-913000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-913000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-913000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-913000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-913000: exit status 83 (83.369959ms)

-- stdout --
	* The control-plane node multinode-913000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-913000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-913000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-913000 -n multinode-913000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-913000 -n multinode-913000: exit status 7 (30.640292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-913000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.32s)
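For reference, the long libmachine command lines in these logs all share one shape: socket_vmnet_client is the executable, and the socket path plus the entire qemu-system-aarch64 command line are its arguments, with the VM's virtio-net device bound to file descriptor 3, which socket_vmnet_client supplies after connecting to the daemon. A trimmed Go sketch of assembling that invocation; paths and the MAC address are copied from the log, and the flag set is abbreviated:

	// qemu_exec.go - shape of the qemu2 driver invocation seen in these logs.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		machineDir := "/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/multinode-913000"
		cmd := exec.Command(
			"/opt/socket_vmnet/bin/socket_vmnet_client", "/var/run/socket_vmnet",
			"qemu-system-aarch64",
			"-M", "virt,highmem=off",
			"-cpu", "host",
			"-accel", "hvf",
			"-m", "2200", "-smp", "2",
			"-boot", "d", "-cdrom", machineDir+"/boot2docker.iso",
			"-device", "virtio-net-pci,netdev=net0,mac=42:31:d7:ec:45:ef",
			"-netdev", "socket,id=net0,fd=3", // fd 3 is handed over by socket_vmnet_client
			"-daemonize", machineDir+"/disk.qcow2",
		)
		fmt.Println(cmd.String()) // print the assembled command line without running it
	}

Because socket_vmnet_client must connect to the daemon before it ever execs QEMU, a refused connection aborts the whole start with exit status 1, which minikube surfaces as the driver errors throughout this report.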

TestPreload (10.12s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-948000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-948000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.949257166s)

-- stdout --
	* [test-preload-948000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-948000" primary control-plane node in "test-preload-948000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-948000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0624 03:29:57.598837    6572 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:29:57.599038    6572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:29:57.599042    6572 out.go:304] Setting ErrFile to fd 2...
	I0624 03:29:57.599047    6572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:29:57.599183    6572 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:29:57.600192    6572 out.go:298] Setting JSON to false
	I0624 03:29:57.616088    6572 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5367,"bootTime":1719219630,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:29:57.616149    6572 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:29:57.623070    6572 out.go:177] * [test-preload-948000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:29:57.630053    6572 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:29:57.630114    6572 notify.go:220] Checking for updates...
	I0624 03:29:57.637970    6572 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:29:57.641020    6572 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:29:57.644831    6572 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:29:57.647994    6572 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:29:57.651028    6572 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:29:57.654339    6572 config.go:182] Loaded profile config "multinode-913000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:29:57.654396    6572 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:29:57.657984    6572 out.go:177] * Using the qemu2 driver based on user configuration
	I0624 03:29:57.665023    6572 start.go:297] selected driver: qemu2
	I0624 03:29:57.665029    6572 start.go:901] validating driver "qemu2" against <nil>
	I0624 03:29:57.665039    6572 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:29:57.667306    6572 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0624 03:29:57.670946    6572 out.go:177] * Automatically selected the socket_vmnet network
	I0624 03:29:57.674093    6572 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 03:29:57.674134    6572 cni.go:84] Creating CNI manager for ""
	I0624 03:29:57.674142    6572 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:29:57.674152    6572 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0624 03:29:57.674189    6572 start.go:340] cluster config:
	{Name:test-preload-948000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-948000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:29:57.678481    6572 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:29:57.686973    6572 out.go:177] * Starting "test-preload-948000" primary control-plane node in "test-preload-948000" cluster
	I0624 03:29:57.691014    6572 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0624 03:29:57.691090    6572 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/test-preload-948000/config.json ...
	I0624 03:29:57.691111    6572 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/test-preload-948000/config.json: {Name:mk1d3fd11cc6f4985668d2dd85a42f2ace6e084a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:29:57.691131    6572 cache.go:107] acquiring lock: {Name:mked59fb8aa75320154cc5604c97a69c9d3437cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:29:57.691137    6572 cache.go:107] acquiring lock: {Name:mke7254556faa3ae3426666bd0501219ed9b70c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:29:57.691157    6572 cache.go:107] acquiring lock: {Name:mk278dd2a85b6834936dc986741083f66cdcd6bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:29:57.691333    6572 cache.go:107] acquiring lock: {Name:mkbde570057cdf4bd6ce84db8c4e99e92eea18c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:29:57.691350    6572 cache.go:107] acquiring lock: {Name:mkf92380bfc87faa808d261fe6ed4577c1dd3871 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:29:57.691395    6572 cache.go:107] acquiring lock: {Name:mk206613de7de9ed044ec410ba0ad43daaa13e56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:29:57.691427    6572 cache.go:107] acquiring lock: {Name:mkce299ce662ad5c59be35de461eb953561c362a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:29:57.691459    6572 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0624 03:29:57.691462    6572 cache.go:107] acquiring lock: {Name:mkc8f901a515e0907150ab2aa800429667a09815 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:29:57.691472    6572 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0624 03:29:57.691494    6572 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0624 03:29:57.691458    6572 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0624 03:29:57.691510    6572 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0624 03:29:57.691553    6572 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0624 03:29:57.691449    6572 start.go:360] acquireMachinesLock for test-preload-948000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:29:57.691681    6572 start.go:364] duration metric: took 53.875µs to acquireMachinesLock for "test-preload-948000"
	I0624 03:29:57.691687    6572 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0624 03:29:57.691699    6572 start.go:93] Provisioning new machine with config: &{Name:test-preload-948000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-948000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:29:57.691741    6572 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:29:57.691760    6572 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0624 03:29:57.699977    6572 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0624 03:29:57.705449    6572 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0624 03:29:57.705772    6572 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0624 03:29:57.706821    6572 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0624 03:29:57.710541    6572 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0624 03:29:57.710574    6572 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0624 03:29:57.710613    6572 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0624 03:29:57.710661    6572 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0624 03:29:57.710719    6572 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0624 03:29:57.717352    6572 start.go:159] libmachine.API.Create for "test-preload-948000" (driver="qemu2")
	I0624 03:29:57.717376    6572 client.go:168] LocalClient.Create starting
	I0624 03:29:57.717462    6572 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:29:57.717493    6572 main.go:141] libmachine: Decoding PEM data...
	I0624 03:29:57.717505    6572 main.go:141] libmachine: Parsing certificate...
	I0624 03:29:57.717553    6572 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:29:57.717577    6572 main.go:141] libmachine: Decoding PEM data...
	I0624 03:29:57.717584    6572 main.go:141] libmachine: Parsing certificate...
	I0624 03:29:57.718001    6572 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:29:57.890932    6572 main.go:141] libmachine: Creating SSH key...
	I0624 03:29:58.020358    6572 main.go:141] libmachine: Creating Disk image...
	I0624 03:29:58.020375    6572 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:29:58.020599    6572 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/test-preload-948000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/test-preload-948000/disk.qcow2
	I0624 03:29:58.033267    6572 main.go:141] libmachine: STDOUT: 
	I0624 03:29:58.033286    6572 main.go:141] libmachine: STDERR: 
	I0624 03:29:58.033343    6572 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/test-preload-948000/disk.qcow2 +20000M
	I0624 03:29:58.044345    6572 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:29:58.044360    6572 main.go:141] libmachine: STDERR: 
	I0624 03:29:58.044371    6572 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/test-preload-948000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/test-preload-948000/disk.qcow2
	I0624 03:29:58.044376    6572 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:29:58.044418    6572 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/test-preload-948000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/test-preload-948000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/test-preload-948000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:8c:0d:1c:d7:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/test-preload-948000/disk.qcow2
	I0624 03:29:58.046178    6572 main.go:141] libmachine: STDOUT: 
	I0624 03:29:58.046196    6572 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:29:58.046215    6572 client.go:171] duration metric: took 328.835417ms to LocalClient.Create
	W0624 03:29:58.686897    6572 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0624 03:29:58.686997    6572 cache.go:162] opening:  /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0624 03:29:58.757477    6572 cache.go:162] opening:  /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0624 03:29:58.803657    6572 cache.go:162] opening:  /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0624 03:29:58.813121    6572 cache.go:162] opening:  /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0624 03:29:58.820794    6572 cache.go:162] opening:  /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0624 03:29:58.953699    6572 cache.go:157] /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0624 03:29:58.953748    6572 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.2626335s
	I0624 03:29:58.953782    6572 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0624 03:29:58.960604    6572 cache.go:157] /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0624 03:29:58.960633    6572 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 1.269344709s
	I0624 03:29:58.960646    6572 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0624 03:29:59.010737    6572 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0624 03:29:59.010737    6572 cache.go:162] opening:  /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0624 03:29:59.010851    6572 cache.go:162] opening:  /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0624 03:29:59.017903    6572 cache.go:162] opening:  /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0624 03:30:00.046535    6572 start.go:128] duration metric: took 2.35478075s to createHost
	I0624 03:30:00.046597    6572 start.go:83] releasing machines lock for "test-preload-948000", held for 2.354922375s
	W0624 03:30:00.046656    6572 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:30:00.060919    6572 out.go:177] * Deleting "test-preload-948000" in qemu2 ...
	W0624 03:30:00.091005    6572 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:30:00.091037    6572 start.go:728] Will try again in 5 seconds ...
	I0624 03:30:01.038380    6572 cache.go:157] /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0624 03:30:01.038409    6572 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.347101834s
	I0624 03:30:01.038427    6572 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0624 03:30:01.066305    6572 cache.go:157] /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0624 03:30:01.066346    6572 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.375007833s
	I0624 03:30:01.066364    6572 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0624 03:30:03.213910    6572 cache.go:157] /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0624 03:30:03.213957    6572 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.52288525s
	I0624 03:30:03.213990    6572 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0624 03:30:03.300271    6572 cache.go:157] /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0624 03:30:03.300317    6572 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.608898417s
	I0624 03:30:03.300337    6572 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0624 03:30:03.312257    6572 cache.go:157] /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0624 03:30:03.312289    6572 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.621212583s
	I0624 03:30:03.312309    6572 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0624 03:30:05.091441    6572 start.go:360] acquireMachinesLock for test-preload-948000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:30:05.091861    6572 start.go:364] duration metric: took 341.666µs to acquireMachinesLock for "test-preload-948000"
	I0624 03:30:05.091963    6572 start.go:93] Provisioning new machine with config: &{Name:test-preload-948000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-948000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:30:05.092211    6572 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:30:05.100881    6572 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0624 03:30:05.151618    6572 start.go:159] libmachine.API.Create for "test-preload-948000" (driver="qemu2")
	I0624 03:30:05.151700    6572 client.go:168] LocalClient.Create starting
	I0624 03:30:05.151980    6572 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:30:05.152045    6572 main.go:141] libmachine: Decoding PEM data...
	I0624 03:30:05.152065    6572 main.go:141] libmachine: Parsing certificate...
	I0624 03:30:05.152137    6572 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:30:05.152181    6572 main.go:141] libmachine: Decoding PEM data...
	I0624 03:30:05.152195    6572 main.go:141] libmachine: Parsing certificate...
	I0624 03:30:05.152702    6572 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:30:05.316614    6572 main.go:141] libmachine: Creating SSH key...
	I0624 03:30:05.445864    6572 main.go:141] libmachine: Creating Disk image...
	I0624 03:30:05.445870    6572 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:30:05.446098    6572 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/test-preload-948000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/test-preload-948000/disk.qcow2
	I0624 03:30:05.459098    6572 main.go:141] libmachine: STDOUT: 
	I0624 03:30:05.459118    6572 main.go:141] libmachine: STDERR: 
	I0624 03:30:05.459216    6572 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/test-preload-948000/disk.qcow2 +20000M
	I0624 03:30:05.470547    6572 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:30:05.470567    6572 main.go:141] libmachine: STDERR: 
	I0624 03:30:05.470579    6572 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/test-preload-948000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/test-preload-948000/disk.qcow2
	I0624 03:30:05.470587    6572 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:30:05.470627    6572 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/test-preload-948000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/test-preload-948000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/test-preload-948000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:48:7c:28:dd:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/test-preload-948000/disk.qcow2
	I0624 03:30:05.472535    6572 main.go:141] libmachine: STDOUT: 
	I0624 03:30:05.472553    6572 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:30:05.472569    6572 client.go:171] duration metric: took 320.86725ms to LocalClient.Create
	I0624 03:30:07.473682    6572 start.go:128] duration metric: took 2.381431s to createHost
	I0624 03:30:07.473759    6572 start.go:83] releasing machines lock for "test-preload-948000", held for 2.381894291s
	W0624 03:30:07.474031    6572 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-948000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-948000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:30:07.490609    6572 out.go:177] 
	W0624 03:30:07.493650    6572 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:30:07.493725    6572 out.go:239] * 
	* 
	W0624 03:30:07.496364    6572 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:30:07.504556    6572 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-948000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-06-24 03:30:07.522316 -0700 PDT m=+681.991548418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-948000 -n test-preload-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-948000 -n test-preload-948000: exit status 7 (68.010959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-948000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-948000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-948000
--- FAIL: TestPreload (10.12s)
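Every start attempt in this run fails at the same step: the qemu2 driver launches QEMU through socket_vmnet_client, and the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so no VM is ever created and minikube exits with status 80. A minimal host-side diagnostic sketch, assuming socket_vmnet is installed under /opt/socket_vmnet as the client path in the log suggests; the launchd label is an assumption based on the upstream install layout, not something taken from this log:

    # Is the daemon's unix socket present, and is the daemon process running?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet

    # The daemon needs root to use the vmnet framework; started by hand for a quick test:
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

    # Or, if it was installed with its launchd service (label assumed):
    sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet

With the daemon listening, the socket_vmnet_client invocation recorded in the log should connect instead of being refused.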

TestScheduledStopUnix (10.24s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-300000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-300000 --memory=2048 --driver=qemu2 : exit status 80 (10.062479459s)

-- stdout --
	* [scheduled-stop-300000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-300000" primary control-plane node in "scheduled-stop-300000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-300000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-300000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-300000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-300000" primary control-plane node in "scheduled-stop-300000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-300000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-300000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-06-24 03:30:17.752522 -0700 PDT m=+692.221844168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-300000 -n scheduled-stop-300000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-300000 -n scheduled-stop-300000: exit status 7 (68.042917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-300000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-300000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-300000
--- FAIL: TestScheduledStopUnix (10.24s)
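The failure mode is identical to TestPreload above: StartHost fails, minikube deletes the half-created profile, retries once after 5 seconds, and gives up with exit status 80 when the second attempt hits the same refused socket. The client/daemon handshake can be exercised without minikube at all; a sketch, relying on socket_vmnet_client connecting first and then exec'ing its child command with the socket on fd 3 (the same mechanism the QEMU invocations in the log use; the no-op child command is only for illustration):

    # Fails with the same "Connection refused" while the daemon is down;
    # exits 0 once the socket is reachable.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
    echo "socket_vmnet_client exit status: $?"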

TestSkaffold (12.16s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe2981070996 version
skaffold_test.go:63: skaffold version: v2.12.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-135000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-135000 --memory=2600 --driver=qemu2 : exit status 80 (9.767882583s)

-- stdout --
	* [skaffold-135000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-135000" primary control-plane node in "skaffold-135000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-135000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-135000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-135000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-135000" primary control-plane node in "skaffold-135000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-135000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-135000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
panic.go:626: *** TestSkaffold FAILED at 2024-06-24 03:30:29.912404 -0700 PDT m=+704.381832501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-135000 -n skaffold-135000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-135000 -n skaffold-135000: exit status 7 (62.306042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-135000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-135000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-135000
--- FAIL: TestSkaffold (12.16s)
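skaffold itself is not implicated here: skaffold v2.12.0 resolved fine, and the test never got past minikube start. The socket paths minikube uses are configurable, so a host whose daemon listens somewhere other than the default can be pointed at the right socket explicitly. A sketch; the paths are illustrative and must match where socket_vmnet is actually listening, and the flags mirror the SocketVMnetClientPath/SocketVMnetPath fields visible in the config dumps above:

    out/minikube-darwin-arm64 start -p skaffold-135000 --memory=2600 --driver=qemu2 \
      --network=socket_vmnet \
      --socket-vmnet-client-path=/opt/socket_vmnet/bin/socket_vmnet_client \
      --socket-vmnet-path=/var/run/socket_vmnet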

TestRunningBinaryUpgrade (615.45s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.800754915 start -p running-upgrade-398000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.800754915 start -p running-upgrade-398000 --memory=2200 --vm-driver=qemu2 : (1m5.787461375s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-398000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-398000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m35.19171675s)

-- stdout --
	* [running-upgrade-398000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-398000" primary control-plane node in "running-upgrade-398000" cluster
	* Updating the running qemu2 "running-upgrade-398000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0624 03:32:14.091013    6932 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:32:14.091178    6932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:32:14.091181    6932 out.go:304] Setting ErrFile to fd 2...
	I0624 03:32:14.091184    6932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:32:14.091323    6932 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:32:14.092496    6932 out.go:298] Setting JSON to false
	I0624 03:32:14.108906    6932 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5504,"bootTime":1719219630,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:32:14.108964    6932 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:32:14.114595    6932 out.go:177] * [running-upgrade-398000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:32:14.122598    6932 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:32:14.122689    6932 notify.go:220] Checking for updates...
	I0624 03:32:14.130568    6932 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:32:14.134647    6932 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:32:14.137616    6932 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:32:14.140613    6932 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:32:14.143606    6932 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:32:14.146858    6932 config.go:182] Loaded profile config "running-upgrade-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0624 03:32:14.149570    6932 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0624 03:32:14.152562    6932 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:32:14.155592    6932 out.go:177] * Using the qemu2 driver based on existing profile
	I0624 03:32:14.162581    6932 start.go:297] selected driver: qemu2
	I0624 03:32:14.162587    6932 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-398000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51210 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-398000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0624 03:32:14.162630    6932 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:32:14.164724    6932 cni.go:84] Creating CNI manager for ""
	I0624 03:32:14.164742    6932 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:32:14.164770    6932 start.go:340] cluster config:
	{Name:running-upgrade-398000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51210 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-398000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0624 03:32:14.164817    6932 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:32:14.172585    6932 out.go:177] * Starting "running-upgrade-398000" primary control-plane node in "running-upgrade-398000" cluster
	I0624 03:32:14.176566    6932 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0624 03:32:14.176580    6932 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0624 03:32:14.176585    6932 cache.go:56] Caching tarball of preloaded images
	I0624 03:32:14.176634    6932 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0624 03:32:14.176638    6932 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0624 03:32:14.176694    6932 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/config.json ...
	I0624 03:32:14.177091    6932 start.go:360] acquireMachinesLock for running-upgrade-398000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:32:15.574890    6932 start.go:364] duration metric: took 1.397803083s to acquireMachinesLock for "running-upgrade-398000"
	I0624 03:32:15.574910    6932 start.go:96] Skipping create...Using existing machine configuration
	I0624 03:32:15.574927    6932 fix.go:54] fixHost starting: 
	I0624 03:32:15.575717    6932 fix.go:112] recreateIfNeeded on running-upgrade-398000: state=Running err=<nil>
	W0624 03:32:15.575727    6932 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 03:32:15.582827    6932 out.go:177] * Updating the running qemu2 "running-upgrade-398000" VM ...
	I0624 03:32:15.586912    6932 machine.go:94] provisionDockerMachine start ...
	I0624 03:32:15.586959    6932 main.go:141] libmachine: Using SSH client type: native
	I0624 03:32:15.587072    6932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051d2900] 0x1051d5160 <nil>  [] 0s} localhost 51144 <nil> <nil>}
	I0624 03:32:15.587077    6932 main.go:141] libmachine: About to run SSH command:
	hostname
	I0624 03:32:15.643689    6932 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-398000
	
	I0624 03:32:15.643705    6932 buildroot.go:166] provisioning hostname "running-upgrade-398000"
	I0624 03:32:15.643747    6932 main.go:141] libmachine: Using SSH client type: native
	I0624 03:32:15.643862    6932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051d2900] 0x1051d5160 <nil>  [] 0s} localhost 51144 <nil> <nil>}
	I0624 03:32:15.643867    6932 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-398000 && echo "running-upgrade-398000" | sudo tee /etc/hostname
	I0624 03:32:15.707421    6932 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-398000
	
	I0624 03:32:15.707475    6932 main.go:141] libmachine: Using SSH client type: native
	I0624 03:32:15.707604    6932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051d2900] 0x1051d5160 <nil>  [] 0s} localhost 51144 <nil> <nil>}
	I0624 03:32:15.707611    6932 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-398000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-398000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-398000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0624 03:32:15.764120    6932 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0624 03:32:15.764135    6932 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19124-4612/.minikube CaCertPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19124-4612/.minikube}
	I0624 03:32:15.764148    6932 buildroot.go:174] setting up certificates
	I0624 03:32:15.764152    6932 provision.go:84] configureAuth start
	I0624 03:32:15.764156    6932 provision.go:143] copyHostCerts
	I0624 03:32:15.764224    6932 exec_runner.go:144] found /Users/jenkins/minikube-integration/19124-4612/.minikube/ca.pem, removing ...
	I0624 03:32:15.764233    6932 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19124-4612/.minikube/ca.pem
	I0624 03:32:15.764346    6932 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19124-4612/.minikube/ca.pem (1082 bytes)
	I0624 03:32:15.764538    6932 exec_runner.go:144] found /Users/jenkins/minikube-integration/19124-4612/.minikube/cert.pem, removing ...
	I0624 03:32:15.764542    6932 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19124-4612/.minikube/cert.pem
	I0624 03:32:15.764582    6932 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19124-4612/.minikube/cert.pem (1123 bytes)
	I0624 03:32:15.764680    6932 exec_runner.go:144] found /Users/jenkins/minikube-integration/19124-4612/.minikube/key.pem, removing ...
	I0624 03:32:15.764684    6932 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19124-4612/.minikube/key.pem
	I0624 03:32:15.764718    6932 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19124-4612/.minikube/key.pem (1679 bytes)
	I0624 03:32:15.764810    6932 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-398000 san=[127.0.0.1 localhost minikube running-upgrade-398000]
	I0624 03:32:15.842744    6932 provision.go:177] copyRemoteCerts
	I0624 03:32:15.842774    6932 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0624 03:32:15.842783    6932 sshutil.go:53] new ssh client: &{IP:localhost Port:51144 SSHKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/running-upgrade-398000/id_rsa Username:docker}
	I0624 03:32:15.873285    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0624 03:32:15.880441    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0624 03:32:15.887829    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0624 03:32:15.896056    6932 provision.go:87] duration metric: took 131.892875ms to configureAuth
	I0624 03:32:15.896069    6932 buildroot.go:189] setting minikube options for container-runtime
	I0624 03:32:15.896186    6932 config.go:182] Loaded profile config "running-upgrade-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0624 03:32:15.896224    6932 main.go:141] libmachine: Using SSH client type: native
	I0624 03:32:15.896315    6932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051d2900] 0x1051d5160 <nil>  [] 0s} localhost 51144 <nil> <nil>}
	I0624 03:32:15.896320    6932 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0624 03:32:15.956553    6932 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0624 03:32:15.956562    6932 buildroot.go:70] root file system type: tmpfs
	I0624 03:32:15.956619    6932 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0624 03:32:15.956665    6932 main.go:141] libmachine: Using SSH client type: native
	I0624 03:32:15.956781    6932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051d2900] 0x1051d5160 <nil>  [] 0s} localhost 51144 <nil> <nil>}
	I0624 03:32:15.956816    6932 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0624 03:32:16.018704    6932 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0624 03:32:16.018761    6932 main.go:141] libmachine: Using SSH client type: native
	I0624 03:32:16.018889    6932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051d2900] 0x1051d5160 <nil>  [] 0s} localhost 51144 <nil> <nil>}
	I0624 03:32:16.018898    6932 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0624 03:32:16.079785    6932 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0624 03:32:16.079796    6932 machine.go:97] duration metric: took 492.882333ms to provisionDockerMachine
	I0624 03:32:16.079802    6932 start.go:293] postStartSetup for "running-upgrade-398000" (driver="qemu2")
	I0624 03:32:16.079808    6932 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0624 03:32:16.079906    6932 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0624 03:32:16.079917    6932 sshutil.go:53] new ssh client: &{IP:localhost Port:51144 SSHKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/running-upgrade-398000/id_rsa Username:docker}
	I0624 03:32:16.113893    6932 ssh_runner.go:195] Run: cat /etc/os-release
	I0624 03:32:16.115378    6932 info.go:137] Remote host: Buildroot 2021.02.12
	I0624 03:32:16.115386    6932 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19124-4612/.minikube/addons for local assets ...
	I0624 03:32:16.115475    6932 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19124-4612/.minikube/files for local assets ...
	I0624 03:32:16.115567    6932 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19124-4612/.minikube/files/etc/ssl/certs/51362.pem -> 51362.pem in /etc/ssl/certs
	I0624 03:32:16.115662    6932 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0624 03:32:16.119178    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/files/etc/ssl/certs/51362.pem --> /etc/ssl/certs/51362.pem (1708 bytes)
	I0624 03:32:16.126317    6932 start.go:296] duration metric: took 46.50875ms for postStartSetup
	I0624 03:32:16.126332    6932 fix.go:56] duration metric: took 551.423416ms for fixHost
	I0624 03:32:16.126373    6932 main.go:141] libmachine: Using SSH client type: native
	I0624 03:32:16.126498    6932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051d2900] 0x1051d5160 <nil>  [] 0s} localhost 51144 <nil> <nil>}
	I0624 03:32:16.126502    6932 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0624 03:32:16.184479    6932 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719225136.521207639
	
	I0624 03:32:16.184489    6932 fix.go:216] guest clock: 1719225136.521207639
	I0624 03:32:16.184493    6932 fix.go:229] Guest: 2024-06-24 03:32:16.521207639 -0700 PDT Remote: 2024-06-24 03:32:16.126334 -0700 PDT m=+2.054686460 (delta=394.873639ms)
	I0624 03:32:16.184505    6932 fix.go:200] guest clock delta is within tolerance: 394.873639ms
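
The guest-clock check above compares `date +%s.%N` from the VM against the host clock and accepts the result when the delta stays inside a tolerance. A small sketch of the same arithmetic (the 2s threshold is an assumption for illustration, not minikube's actual value):

	package main

	import (
		"fmt"
		"time"
	)

	// clockDelta returns the absolute difference between the guest clock,
	// reported as seconds.nanoseconds by `date +%s.%N`, and the host clock.
	func clockDelta(guest, host time.Time) time.Duration {
		d := guest.Sub(host)
		if d < 0 {
			d = -d
		}
		return d
	}

	func main() {
		guest := time.Unix(1719225136, 521207639) // value from the log line above
		host := guest.Add(-394873639 * time.Nanosecond)
		delta := clockDelta(guest, host)
		fmt.Printf("delta=%v within=%v\n", delta, delta <= 2*time.Second)
	}
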
	I0624 03:32:16.184507    6932 start.go:83] releasing machines lock for "running-upgrade-398000", held for 609.612708ms
	I0624 03:32:16.184572    6932 ssh_runner.go:195] Run: cat /version.json
	I0624 03:32:16.184579    6932 sshutil.go:53] new ssh client: &{IP:localhost Port:51144 SSHKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/running-upgrade-398000/id_rsa Username:docker}
	I0624 03:32:16.184597    6932 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0624 03:32:16.184624    6932 sshutil.go:53] new ssh client: &{IP:localhost Port:51144 SSHKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/running-upgrade-398000/id_rsa Username:docker}
	W0624 03:32:16.185224    6932 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51144: connect: connection refused
	I0624 03:32:16.185242    6932 retry.go:31] will retry after 182.040143ms: dial tcp [::1]:51144: connect: connection refused
	W0624 03:32:16.400640    6932 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0624 03:32:16.400714    6932 ssh_runner.go:195] Run: systemctl --version
	I0624 03:32:16.402584    6932 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0624 03:32:16.404128    6932 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0624 03:32:16.404153    6932 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0624 03:32:16.407252    6932 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0624 03:32:16.411484    6932 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
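
The two find/sed invocations above rewrite any bridge/podman CNI config so its subnet and gateway match the 10.244.0.0/16 pod CIDR. The same rewrite, sketched in Go with regular expressions over an inline sample (the real files live under /etc/cni/net.d):

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Stand-in for a bridge CNI config file; only the rewrite logic matters here.
		conf := `{"type": "bridge", "ipam": {"subnet": "10.88.0.0/16", "gateway": "10.88.0.1"}}`
		subnetRe := regexp.MustCompile(`"subnet":\s*"[^"]*"`)
		gatewayRe := regexp.MustCompile(`"gateway":\s*"[^"]*"`)
		out := subnetRe.ReplaceAllString(conf, `"subnet": "10.244.0.0/16"`)
		out = gatewayRe.ReplaceAllString(out, `"gateway": "10.244.0.1"`)
		fmt.Println(out)
	}
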
	I0624 03:32:16.411491    6932 start.go:494] detecting cgroup driver to use...
	I0624 03:32:16.411566    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 03:32:16.416750    6932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0624 03:32:16.420238    6932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0624 03:32:16.423327    6932 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0624 03:32:16.423349    6932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0624 03:32:16.426457    6932 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 03:32:16.429164    6932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0624 03:32:16.432328    6932 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 03:32:16.435372    6932 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0624 03:32:16.438365    6932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0624 03:32:16.441262    6932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0624 03:32:16.444572    6932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0624 03:32:16.447648    6932 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0624 03:32:16.450430    6932 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0624 03:32:16.452931    6932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:32:16.553019    6932 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0624 03:32:16.559321    6932 start.go:494] detecting cgroup driver to use...
	I0624 03:32:16.559384    6932 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0624 03:32:16.567665    6932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 03:32:16.572802    6932 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0624 03:32:16.584041    6932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 03:32:16.589334    6932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 03:32:16.594185    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 03:32:16.599444    6932 ssh_runner.go:195] Run: which cri-dockerd
	I0624 03:32:16.600660    6932 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0624 03:32:16.603770    6932 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0624 03:32:16.608804    6932 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0624 03:32:16.698957    6932 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0624 03:32:16.787947    6932 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0624 03:32:16.788017    6932 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
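
The 130-byte /etc/docker/daemon.json written here configures docker for the cgroupfs driver; the log records only the file's size, so the sketch below assumes a typical shape for such a file rather than minikube's exact contents:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// Assumed contents: the log only shows that a small daemon.json selecting
		// the cgroupfs driver was transferred, not the exact keys.
		cfg := map[string]any{
			"exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
			"log-driver": "json-file",
		}
		b, err := json.MarshalIndent(cfg, "", "  ")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(b))
	}
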
	I0624 03:32:16.793306    6932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:32:16.877822    6932 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0624 03:32:30.252657    6932 ssh_runner.go:235] Completed: sudo systemctl restart docker: (13.374935375s)
	I0624 03:32:30.252716    6932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0624 03:32:30.257966    6932 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0624 03:32:30.267087    6932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0624 03:32:30.271838    6932 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0624 03:32:30.354296    6932 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0624 03:32:30.438590    6932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:32:30.521192    6932 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0624 03:32:30.527624    6932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0624 03:32:30.532195    6932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:32:30.621373    6932 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0624 03:32:30.660382    6932 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0624 03:32:30.660440    6932 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0624 03:32:30.663391    6932 start.go:562] Will wait 60s for crictl version
	I0624 03:32:30.663443    6932 ssh_runner.go:195] Run: which crictl
	I0624 03:32:30.664767    6932 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0624 03:32:30.676929    6932 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
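
The crictl version output above is a simple `Key:  value` listing. A sketch of parsing it into a map (the input string is copied from the log):

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	func main() {
		out := "Version:  0.1.0\nRuntimeName:  docker\nRuntimeVersion:  20.10.16\nRuntimeApiVersion:  1.41.0\n"
		fields := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(out))
		for sc.Scan() {
			// Split each line on the first colon; trim the padding around the value.
			if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
				fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
			}
		}
		fmt.Println(fields["RuntimeName"], fields["RuntimeVersion"])
	}
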
	I0624 03:32:30.676994    6932 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0624 03:32:30.689998    6932 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0624 03:32:30.707997    6932 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0624 03:32:30.708064    6932 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0624 03:32:30.709437    6932 kubeadm.go:877] updating cluster {Name:running-upgrade-398000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51210 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-398000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0624 03:32:30.709487    6932 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0624 03:32:30.709528    6932 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0624 03:32:30.720324    6932 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0624 03:32:30.720332    6932 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0624 03:32:30.720374    6932 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0624 03:32:30.723238    6932 ssh_runner.go:195] Run: which lz4
	I0624 03:32:30.724616    6932 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0624 03:32:30.725739    6932 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0624 03:32:30.725749    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0624 03:32:31.526684    6932 docker.go:649] duration metric: took 802.104333ms to copy over tarball
	I0624 03:32:31.526757    6932 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0624 03:32:32.840437    6932 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.313678125s)
	I0624 03:32:32.840450    6932 ssh_runner.go:146] rm: /preloaded.tar.lz4
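
The preload step above is a stat-then-transfer pattern: check whether /preloaded.tar.lz4 already exists on the guest and copy the cached tarball over only when the stat fails, then extract and delete it. A local-filesystem sketch of the existence check (paths are placeholders):

	package main

	import (
		"fmt"
		"io"
		"os"
	)

	// copyIfMissing stats the destination and only transfers the file when absent.
	func copyIfMissing(src, dst string) error {
		if _, err := os.Stat(dst); err == nil {
			return nil // already present, skip the transfer
		} else if !os.IsNotExist(err) {
			return err
		}
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	}

	func main() {
		fmt.Println(copyIfMissing("preloaded.tar.lz4", "/tmp/preloaded.tar.lz4"))
	}
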
	I0624 03:32:32.856097    6932 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0624 03:32:32.859208    6932 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0624 03:32:32.864784    6932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:32:32.942133    6932 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0624 03:32:34.170457    6932 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.228314s)
	I0624 03:32:34.170576    6932 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0624 03:32:34.183251    6932 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0624 03:32:34.183277    6932 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0624 03:32:34.183286    6932 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0624 03:32:34.189682    6932 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0624 03:32:34.189684    6932 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0624 03:32:34.189753    6932 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0624 03:32:34.189839    6932 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0624 03:32:34.189909    6932 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0624 03:32:34.189926    6932 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0624 03:32:34.189930    6932 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0624 03:32:34.190112    6932 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0624 03:32:34.199623    6932 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0624 03:32:34.199705    6932 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0624 03:32:34.199913    6932 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0624 03:32:34.199915    6932 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0624 03:32:34.200199    6932 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0624 03:32:34.200198    6932 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0624 03:32:34.200336    6932 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0624 03:32:34.200601    6932 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	W0624 03:32:35.001606    6932 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0624 03:32:35.002030    6932 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0624 03:32:35.030712    6932 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0624 03:32:35.032232    6932 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0624 03:32:35.032269    6932 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0624 03:32:35.032320    6932 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0624 03:32:35.050702    6932 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0624 03:32:35.050729    6932 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0624 03:32:35.050811    6932 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0624 03:32:35.062883    6932 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0624 03:32:35.063009    6932 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0624 03:32:35.071733    6932 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0624 03:32:35.071749    6932 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0624 03:32:35.071766    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0624 03:32:35.073658    6932 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0624 03:32:35.084610    6932 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0624 03:32:35.101011    6932 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0624 03:32:35.101034    6932 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0624 03:32:35.101088    6932 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0624 03:32:35.112185    6932 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0624 03:32:35.112198    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0624 03:32:35.116451    6932 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0624 03:32:35.116471    6932 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0624 03:32:35.116523    6932 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0624 03:32:35.118154    6932 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0624 03:32:35.125257    6932 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0624 03:32:35.236886    6932 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	W0624 03:32:35.243876    6932 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0624 03:32:35.243914    6932 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0624 03:32:35.244088    6932 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0624 03:32:35.387328    6932 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0624 03:32:35.387373    6932 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0624 03:32:35.387405    6932 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0624 03:32:35.387423    6932 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0624 03:32:35.387435    6932 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0624 03:32:35.387444    6932 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0624 03:32:35.387471    6932 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0624 03:32:35.387471    6932 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0624 03:32:35.387499    6932 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0624 03:32:35.387505    6932 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0624 03:32:35.387512    6932 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0624 03:32:35.387512    6932 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0624 03:32:35.387537    6932 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0624 03:32:35.387551    6932 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0624 03:32:35.410630    6932 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0624 03:32:35.410641    6932 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0624 03:32:35.410763    6932 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0624 03:32:35.423483    6932 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0624 03:32:35.423542    6932 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0624 03:32:35.423551    6932 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0624 03:32:35.423561    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0624 03:32:35.423608    6932 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0624 03:32:35.425287    6932 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0624 03:32:35.425297    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0624 03:32:35.436210    6932 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0624 03:32:35.436226    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0624 03:32:35.487806    6932 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0624 03:32:35.487832    6932 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0624 03:32:35.487843    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0624 03:32:35.523035    6932 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0624 03:32:35.523072    6932 cache_images.go:92] duration metric: took 1.339790833s to LoadCachedImages
	W0624 03:32:35.523116    6932 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
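
The cache_images cycle above follows one pattern per image: inspect the runtime's image ID, and when it does not match the expected hash, remove the stale copy, transfer the cached tarball, and re-load it. A sketch of that reconcile step shelling out to docker (the image name, expected ID, and tarball path are placeholders):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// ensureImage reconciles one image: keep it if the ID matches, otherwise
	// remove the mismatched copy and re-load it from a cached tarball.
	func ensureImage(image, wantID, tarball string) error {
		out, _ := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
		if strings.TrimSpace(string(out)) == wantID {
			return nil // already present at the right hash
		}
		_ = exec.Command("docker", "rmi", image).Run() // ignore error if absent
		f, err := os.Open(tarball)
		if err != nil {
			return err
		}
		defer f.Close()
		load := exec.Command("docker", "load")
		load.Stdin = f
		return load.Run()
	}

	func main() {
		fmt.Println(ensureImage("registry.k8s.io/pause:3.7", "<expected-image-id>", "pause_3.7"))
	}
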
	I0624 03:32:35.523124    6932 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0624 03:32:35.523180    6932 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-398000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-398000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0624 03:32:35.523242    6932 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0624 03:32:35.536789    6932 cni.go:84] Creating CNI manager for ""
	I0624 03:32:35.536805    6932 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:32:35.536813    6932 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0624 03:32:35.536822    6932 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-398000 NodeName:running-upgrade-398000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0624 03:32:35.536890    6932 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-398000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
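
The kubeadm config above is generated from the cluster parameters listed at kubeadm.go:181. A small text/template sketch rendering just the InitConfiguration fragment from those parameters (the template covers only a slice of the real config and is illustrative, not minikube's actual template):

	package main

	import (
		"os"
		"text/template"
	)

	const frag = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.NodeIP}}
	  bindPort: {{.Port}}
	nodeRegistration:
	  criSocket: unix://{{.CRISocket}}
	  name: "{{.Name}}"
	  kubeletExtraArgs:
	    node-ip: {{.NodeIP}}
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(frag))
		// Parameter values taken from the log above.
		if err := t.Execute(os.Stdout, map[string]string{
			"NodeIP":    "10.0.2.15",
			"Port":      "8443",
			"CRISocket": "/var/run/cri-dockerd.sock",
			"Name":      "running-upgrade-398000",
		}); err != nil {
			panic(err)
		}
	}
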
	
	I0624 03:32:35.536948    6932 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0624 03:32:35.539922    6932 binaries.go:44] Found k8s binaries, skipping transfer
	I0624 03:32:35.539962    6932 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0624 03:32:35.542777    6932 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0624 03:32:35.547777    6932 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0624 03:32:35.552347    6932 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0624 03:32:35.557363    6932 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0624 03:32:35.558472    6932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:32:35.642821    6932 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0624 03:32:35.647650    6932 certs.go:68] Setting up /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000 for IP: 10.0.2.15
	I0624 03:32:35.647655    6932 certs.go:194] generating shared ca certs ...
	I0624 03:32:35.647662    6932 certs.go:226] acquiring lock for ca certs: {Name:mk1070bf28491713fa565ef6662c76d5a9260883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:32:35.647824    6932 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/ca.key
	I0624 03:32:35.647880    6932 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/proxy-client-ca.key
	I0624 03:32:35.647885    6932 certs.go:256] generating profile certs ...
	I0624 03:32:35.647957    6932 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/client.key
	I0624 03:32:35.647976    6932 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/apiserver.key.8ad61615
	I0624 03:32:35.647990    6932 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/apiserver.crt.8ad61615 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0624 03:32:35.748513    6932 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/apiserver.crt.8ad61615 ...
	I0624 03:32:35.748528    6932 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/apiserver.crt.8ad61615: {Name:mk7cb03054a669937a45b7bb1f7d8fe1bc07de87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:32:35.748813    6932 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/apiserver.key.8ad61615 ...
	I0624 03:32:35.748817    6932 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/apiserver.key.8ad61615: {Name:mk9c29e898cba469e6a986fd7743e831a225721e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:32:35.748957    6932 certs.go:381] copying /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/apiserver.crt.8ad61615 -> /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/apiserver.crt
	I0624 03:32:35.749092    6932 certs.go:385] copying /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/apiserver.key.8ad61615 -> /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/apiserver.key
	I0624 03:32:35.749243    6932 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/proxy-client.key
	I0624 03:32:35.749373    6932 certs.go:484] found cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/5136.pem (1338 bytes)
	W0624 03:32:35.749403    6932 certs.go:480] ignoring /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/5136_empty.pem, impossibly tiny 0 bytes
	I0624 03:32:35.749409    6932 certs.go:484] found cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca-key.pem (1675 bytes)
	I0624 03:32:35.749440    6932 certs.go:484] found cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem (1082 bytes)
	I0624 03:32:35.749467    6932 certs.go:484] found cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem (1123 bytes)
	I0624 03:32:35.749493    6932 certs.go:484] found cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/key.pem (1679 bytes)
	I0624 03:32:35.749545    6932 certs.go:484] found cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/files/etc/ssl/certs/51362.pem (1708 bytes)
	I0624 03:32:35.749887    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0624 03:32:35.757296    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0624 03:32:35.764166    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0624 03:32:35.770524    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0624 03:32:35.777673    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0624 03:32:35.784846    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0624 03:32:35.791724    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0624 03:32:35.798645    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0624 03:32:35.805912    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/files/etc/ssl/certs/51362.pem --> /usr/share/ca-certificates/51362.pem (1708 bytes)
	I0624 03:32:35.813133    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0624 03:32:35.819781    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/5136.pem --> /usr/share/ca-certificates/5136.pem (1338 bytes)
	I0624 03:32:35.826807    6932 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0624 03:32:35.831834    6932 ssh_runner.go:195] Run: openssl version
	I0624 03:32:35.833782    6932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/51362.pem && ln -fs /usr/share/ca-certificates/51362.pem /etc/ssl/certs/51362.pem"
	I0624 03:32:35.836895    6932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/51362.pem
	I0624 03:32:35.838160    6932 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 24 10:19 /usr/share/ca-certificates/51362.pem
	I0624 03:32:35.838183    6932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/51362.pem
	I0624 03:32:35.840095    6932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/51362.pem /etc/ssl/certs/3ec20f2e.0"
	I0624 03:32:35.843009    6932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0624 03:32:35.846593    6932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0624 03:32:35.848535    6932 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 24 10:31 /usr/share/ca-certificates/minikubeCA.pem
	I0624 03:32:35.848559    6932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0624 03:32:35.850526    6932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0624 03:32:35.853909    6932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5136.pem && ln -fs /usr/share/ca-certificates/5136.pem /etc/ssl/certs/5136.pem"
	I0624 03:32:35.857268    6932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5136.pem
	I0624 03:32:35.858620    6932 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 24 10:19 /usr/share/ca-certificates/5136.pem
	I0624 03:32:35.858639    6932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5136.pem
	I0624 03:32:35.860632    6932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5136.pem /etc/ssl/certs/51391683.0"
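
The three cycles above follow one pattern per certificate: copy it into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it as /etc/ssl/certs/<hash>.0 so TLS libraries can find it by hash lookup. A sketch of the hash-and-link step (shelling out to openssl, since Go's crypto/x509 does not expose OpenSSL's subject-hash; paths are placeholders):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkByHash computes the OpenSSL subject hash of a CA certificate and
	// exposes the cert as <certsDir>/<hash>.0.
	func linkByHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // replace any stale link
		return os.Symlink(certPath, link)
	}

	func main() {
		fmt.Println(linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
	}
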
	I0624 03:32:35.863447    6932 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0624 03:32:35.865067    6932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0624 03:32:35.866764    6932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0624 03:32:35.868825    6932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0624 03:32:35.870579    6932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0624 03:32:35.872785    6932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0624 03:32:35.874470    6932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
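
Each `openssl x509 -checkend 86400` above asks whether a certificate expires within the next 24 hours. The same check in Go with crypto/x509 (the path is one of the certs from the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the certificate at path expires inside the
	// given window, the crypto/x509 equivalent of `openssl x509 -checkend`.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		b, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(b)
		if block == nil {
			return false, errors.New("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		ok, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
		fmt.Println(ok, err)
	}
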
	I0624 03:32:35.876093    6932 kubeadm.go:391] StartCluster: {Name:running-upgrade-398000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51210 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-398000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0624 03:32:35.876159    6932 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0624 03:32:35.886374    6932 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0624 03:32:35.890510    6932 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0624 03:32:35.890518    6932 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0624 03:32:35.890525    6932 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0624 03:32:35.890547    6932 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0624 03:32:35.893390    6932 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0624 03:32:35.893692    6932 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-398000" does not appear in /Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:32:35.893793    6932 kubeconfig.go:62] /Users/jenkins/minikube-integration/19124-4612/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-398000" cluster setting kubeconfig missing "running-upgrade-398000" context setting]
	I0624 03:32:35.894023    6932 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/kubeconfig: {Name:mkbbeb070f5681b596c7409cd66efdb520d422d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:32:35.894473    6932 kapi.go:59] client config for running-upgrade-398000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/client.key", CAFile:"/Users/jenkins/minikube-integration/19124-4612/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10655ed80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0624 03:32:35.894791    6932 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0624 03:32:35.897605    6932 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-398000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0624 03:32:35.897611    6932 kubeadm.go:1154] stopping kube-system containers ...
	I0624 03:32:35.897649    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0624 03:32:35.908738    6932 docker.go:483] Stopping containers: [1fe49719b853 97bd8b01ebb9 46cc05d81f82 aae9a727b1ef 5e68f03fc08d ff24041fb2ac f0f772cfc12e 1ebbfbc68569 318a5cc223b5 b8559e67098a 802c3a1e9cad b62fd1734dff fc34224f55d0 dac2a23ff62a 1300b36c45bd 091967d291c6]
	I0624 03:32:35.908796    6932 ssh_runner.go:195] Run: docker stop 1fe49719b853 97bd8b01ebb9 46cc05d81f82 aae9a727b1ef 5e68f03fc08d ff24041fb2ac f0f772cfc12e 1ebbfbc68569 318a5cc223b5 b8559e67098a 802c3a1e9cad b62fd1734dff fc34224f55d0 dac2a23ff62a 1300b36c45bd 091967d291c6
	I0624 03:32:35.919936    6932 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0624 03:32:36.012904    6932 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0624 03:32:36.016950    6932 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5639 Jun 24 10:32 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Jun 24 10:32 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jun 24 10:32 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Jun 24 10:32 /etc/kubernetes/scheduler.conf
	
	I0624 03:32:36.016985    6932 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51210 /etc/kubernetes/admin.conf
	I0624 03:32:36.020526    6932 kubeadm.go:162] "https://control-plane.minikube.internal:51210" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51210 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0624 03:32:36.020553    6932 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0624 03:32:36.023951    6932 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51210 /etc/kubernetes/kubelet.conf
	I0624 03:32:36.027155    6932 kubeadm.go:162] "https://control-plane.minikube.internal:51210" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51210 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0624 03:32:36.027181    6932 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0624 03:32:36.029737    6932 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51210 /etc/kubernetes/controller-manager.conf
	I0624 03:32:36.032669    6932 kubeadm.go:162] "https://control-plane.minikube.internal:51210" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51210 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0624 03:32:36.032686    6932 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0624 03:32:36.035964    6932 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51210 /etc/kubernetes/scheduler.conf
	I0624 03:32:36.038649    6932 kubeadm.go:162] "https://control-plane.minikube.internal:51210" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51210 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0624 03:32:36.038672    6932 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
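
Each grep-then-rm pair above keeps an existing kubeconfig only if it already references the expected control-plane endpoint; otherwise the file is deleted so the `kubeadm init phase kubeconfig` step that follows can regenerate it. The same check as a local sketch:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// pruneStaleConf keeps a kubeconfig only when it already points at the
	// expected control-plane endpoint; otherwise it removes the file so
	// kubeadm can regenerate it.
	func pruneStaleConf(path, endpoint string) error {
		b, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		if strings.Contains(string(b), endpoint) {
			return nil // endpoint matches, keep the file
		}
		return os.Remove(path)
	}

	func main() {
		err := pruneStaleConf("/etc/kubernetes/admin.conf", "https://control-plane.minikube.internal:51210")
		fmt.Println(err)
	}
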
	I0624 03:32:36.041261    6932 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0624 03:32:36.044620    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0624 03:32:36.066492    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0624 03:32:36.486726    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0624 03:32:36.730288    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0624 03:32:36.751816    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0624 03:32:36.772713    6932 api_server.go:52] waiting for apiserver process to appear ...
	I0624 03:32:36.772785    6932 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 03:32:37.275116    6932 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 03:32:37.775093    6932 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 03:32:38.274960    6932 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 03:32:38.774830    6932 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 03:32:39.274826    6932 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 03:32:39.279170    6932 api_server.go:72] duration metric: took 2.506481167s to wait for apiserver process to appear ...
	I0624 03:32:39.279179    6932 api_server.go:88] waiting for apiserver healthz status ...
	I0624 03:32:39.279187    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:32:44.281253    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:32:44.281304    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:32:49.281517    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:32:49.281559    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:32:54.281933    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:32:54.281959    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:32:59.282339    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:32:59.282363    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:04.282862    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:04.282906    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:09.283613    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:09.283638    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:14.284537    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:14.284579    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:19.285745    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:19.285875    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:24.287567    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:24.287627    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:29.289619    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:29.289643    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:34.291763    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:34.291789    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:39.293950    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
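
Every healthz probe above gives up after about five seconds without receiving response headers, which is what "Client.Timeout exceeded while awaiting headers" means. A hand-run equivalent of one probe (with -k standing in for the cluster CA that minikube's own client trusts; a healthy apiserver answers with the literal body "ok"):

    #!/bin/bash
    # One healthz probe, as repeated above: ~5s client timeout, HTTPS
    # straight to the guest IP.
    curl -sk --max-time 5 https://10.0.2.15:8443/healthz
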
	I0624 03:33:39.294165    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:33:39.311549    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:33:39.311632    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:33:39.324586    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:33:39.324654    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:33:39.336109    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:33:39.336174    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:33:39.346084    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:33:39.346142    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:33:39.358239    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:33:39.358319    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:33:39.372045    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:33:39.372112    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:33:39.382882    6932 logs.go:276] 0 containers: []
	W0624 03:33:39.382894    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:33:39.382952    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:33:39.393111    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
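
Having given up on healthz for this round, the tool inventories containers per control-plane component. Each query filters on the k8s_<component> name prefix that cri-dockerd assigns, and -a includes exited containers, which is why most components report two IDs here: typically a dead container plus its restarted successor. The discovery step, written out:

    #!/bin/bash
    # List container IDs per component, including exited ones (-a).
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      echo "$c: $(docker ps -a --filter="name=k8s_${c}" --format='{{.ID}}')"
    done
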
	I0624 03:33:39.393127    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:33:39.393132    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:33:39.404452    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:33:39.404463    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:33:39.421938    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:33:39.421948    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:33:39.436798    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:33:39.436810    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:33:39.450550    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:33:39.450563    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:33:39.574684    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:33:39.574696    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:33:39.588026    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:33:39.588039    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:33:39.599670    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:33:39.599679    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:33:39.611907    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:33:39.611919    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:33:39.626417    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:33:39.626427    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:33:39.642806    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:33:39.642819    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:33:39.654304    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:33:39.654315    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:33:39.665851    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:33:39.665862    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:33:39.704614    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:33:39.704623    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:33:39.709365    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:33:39.709371    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:33:39.734286    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:33:39.734293    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:33:39.748491    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:33:39.748500    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
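
That completes one full gathering pass: every discovered container gets its log tail pulled, with --tail 400 capping each container's contribution. The per-container step amounts to (IDs taken from this pass of the log):

    #!/bin/bash
    # Print a separator, then the last 400 log lines of each container
    # found above.
    for id in d9f26ec806e4 5e68f03fc08d 60e930ef5396 ff24041fb2ac; do
      echo "=== $id ==="
      docker logs --tail 400 "$id"
    done
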
	I0624 03:33:42.264700    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:47.266963    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:47.267150    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:33:47.279858    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:33:47.279936    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:33:47.290733    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:33:47.290802    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:33:47.301011    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:33:47.301090    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:33:47.312146    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:33:47.312214    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:33:47.323361    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:33:47.323431    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:33:47.333598    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:33:47.333657    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:33:47.343716    6932 logs.go:276] 0 containers: []
	W0624 03:33:47.343728    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:33:47.343787    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:33:47.354443    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:33:47.354462    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:33:47.354468    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:33:47.395244    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:33:47.395267    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:33:47.420786    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:33:47.420799    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:33:47.436307    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:33:47.436318    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:33:47.447722    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:33:47.447733    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:33:47.461935    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:33:47.461948    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:33:47.479835    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:33:47.479847    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:33:47.491065    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:33:47.491078    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:33:47.516513    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:33:47.516530    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:33:47.529005    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:33:47.529017    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:33:47.545423    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:33:47.545434    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:33:47.557807    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:33:47.557823    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:33:47.569122    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:33:47.569134    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:33:47.573973    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:33:47.573982    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:33:47.610958    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:33:47.610968    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:33:47.623249    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:33:47.623263    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:33:47.637706    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:33:47.637717    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
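
Alongside the per-container tails, each pass also pulls the systemd journals for the kubelet and for the Docker and cri-docker units, again capped at 400 entries. Those two pulls are runnable on the guest as-is:

    #!/bin/bash
    # Last 400 journal entries for the kubelet and the container runtime.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
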
	I0624 03:33:50.152406    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:55.154765    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:55.155127    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:33:55.185636    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:33:55.185782    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:33:55.203540    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:33:55.203623    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:33:55.216326    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:33:55.216402    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:33:55.228108    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:33:55.228172    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:33:55.238801    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:33:55.238861    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:33:55.249447    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:33:55.249506    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:33:55.260127    6932 logs.go:276] 0 containers: []
	W0624 03:33:55.260140    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:33:55.260207    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:33:55.270640    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:33:55.270658    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:33:55.270664    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:33:55.307893    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:33:55.307902    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:33:55.321214    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:33:55.321229    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:33:55.334086    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:33:55.334101    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:33:55.348870    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:33:55.348880    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:33:55.360958    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:33:55.360972    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:33:55.376177    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:33:55.376194    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:33:55.387354    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:33:55.387365    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:33:55.399737    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:33:55.399751    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:33:55.424663    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:33:55.424671    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:33:55.465616    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:33:55.465631    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:33:55.478333    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:33:55.478349    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:33:55.491785    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:33:55.491798    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:33:55.504817    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:33:55.504830    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:33:55.509482    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:33:55.509489    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:33:55.526095    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:33:55.526110    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:33:55.540197    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:33:55.540212    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
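
The dmesg pull that recurs in these passes is worth unpacking once, since its flags are terse: -H asks for human-readable timestamps, -P suppresses the pager that -H would otherwise start, -L=never disables color codes, and --level keeps only warnings and worse before tail caps the output:

    #!/bin/bash
    # Kernel ring buffer, warnings and above only, last 400 lines.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
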
	I0624 03:33:58.060266    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:34:03.062700    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:34:03.063106    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:34:03.095367    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:34:03.095521    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:34:03.114059    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:34:03.114140    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:34:03.127843    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:34:03.127915    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:34:03.139459    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:34:03.139531    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:34:03.151512    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:34:03.151575    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:34:03.162447    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:34:03.162510    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:34:03.174926    6932 logs.go:276] 0 containers: []
	W0624 03:34:03.174938    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:34:03.174993    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:34:03.189483    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:34:03.189500    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:34:03.189506    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:34:03.201293    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:34:03.201306    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:34:03.217126    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:34:03.217136    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:34:03.229093    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:34:03.229105    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:34:03.254373    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:34:03.254383    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:34:03.258577    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:34:03.258584    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:34:03.274561    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:34:03.274572    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:34:03.294042    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:34:03.294052    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:34:03.305822    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:34:03.305834    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:34:03.340338    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:34:03.340349    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:34:03.352833    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:34:03.352844    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:34:03.366276    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:34:03.366286    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:34:03.403117    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:34:03.403129    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:34:03.417115    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:34:03.417126    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:34:03.428849    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:34:03.428860    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:34:03.440671    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:34:03.440683    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:34:03.457428    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:34:03.457438    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
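
The "container status" command in these passes is a runtime-agnostic fallback chain. Unpacked: which crictl || echo crictl substitutes either the full crictl path or the bare name (so the command still fails cleanly when crictl is absent), and the outer || falls back to querying Docker directly:

    #!/bin/bash
    # Prefer crictl if installed; otherwise ask Docker for the same view.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
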
	I0624 03:34:05.971583    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:34:10.974135    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:34:10.974473    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:34:10.998609    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:34:10.998725    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:34:11.014945    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:34:11.015016    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:34:11.026842    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:34:11.026908    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:34:11.037946    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:34:11.038027    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:34:11.048920    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:34:11.048988    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:34:11.060046    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:34:11.060110    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:34:11.071310    6932 logs.go:276] 0 containers: []
	W0624 03:34:11.071320    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:34:11.071371    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:34:11.082650    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:34:11.082668    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:34:11.082674    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:34:11.119551    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:34:11.119563    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:34:11.131143    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:34:11.131155    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:34:11.135722    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:34:11.135731    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:34:11.158297    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:34:11.158307    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:34:11.171765    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:34:11.171774    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:34:11.189119    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:34:11.189131    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:34:11.200862    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:34:11.200873    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:34:11.225776    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:34:11.225784    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:34:11.238849    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:34:11.238860    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:34:11.275833    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:34:11.275841    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:34:11.294406    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:34:11.294417    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:34:11.311651    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:34:11.311660    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:34:11.325202    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:34:11.325215    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:34:11.337569    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:34:11.337579    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:34:11.348932    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:34:11.348944    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:34:11.360431    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:34:11.360441    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
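
The "describe nodes" step deliberately bypasses the host: it runs the version-pinned kubectl staged on the guest against the guest-local kubeconfig, so node state can still be dumped even when the host's kubectl or context is unusable:

    #!/bin/bash
    # Dump node state using the guest's own kubectl and kubeconfig.
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
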
	I0624 03:34:13.878351    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:34:18.880669    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:34:18.880847    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:34:18.896636    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:34:18.896707    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:34:18.911607    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:34:18.911684    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:34:18.922805    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:34:18.922882    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:34:18.934006    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:34:18.934075    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:34:18.944933    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:34:18.945002    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:34:18.955537    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:34:18.955609    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:34:18.965392    6932 logs.go:276] 0 containers: []
	W0624 03:34:18.965404    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:34:18.965454    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:34:18.975671    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:34:18.975688    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:34:18.975695    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:34:18.980404    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:34:18.980412    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:34:19.014430    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:34:19.014442    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:34:19.030768    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:34:19.030780    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:34:19.042857    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:34:19.042868    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:34:19.081897    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:34:19.081908    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:34:19.103140    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:34:19.103149    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:34:19.114520    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:34:19.114534    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:34:19.130803    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:34:19.130814    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:34:19.144463    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:34:19.144474    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:34:19.161608    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:34:19.161618    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:34:19.173765    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:34:19.173778    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:34:19.187603    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:34:19.187613    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:34:19.200294    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:34:19.200304    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:34:19.213328    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:34:19.213340    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:34:19.231113    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:34:19.231122    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:34:19.242056    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:34:19.242067    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:34:21.770168    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:34:26.772511    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:34:26.772752    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:34:26.794563    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:34:26.794665    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:34:26.809333    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:34:26.809400    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:34:26.821721    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:34:26.821785    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:34:26.832298    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:34:26.832374    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:34:26.843590    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:34:26.843662    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:34:26.854284    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:34:26.854343    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:34:26.864357    6932 logs.go:276] 0 containers: []
	W0624 03:34:26.864367    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:34:26.864418    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:34:26.874712    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:34:26.874730    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:34:26.874739    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:34:26.899321    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:34:26.899329    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:34:26.914574    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:34:26.914585    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:34:26.929568    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:34:26.929578    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:34:26.943331    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:34:26.943341    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:34:26.956935    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:34:26.956947    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:34:26.970452    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:34:26.970466    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:34:26.986376    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:34:26.986386    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:34:27.003640    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:34:27.003649    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:34:27.016883    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:34:27.016893    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:34:27.021782    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:34:27.021790    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:34:27.034314    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:34:27.034327    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:34:27.045606    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:34:27.045617    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:34:27.057225    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:34:27.057236    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:34:27.079019    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:34:27.079028    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:34:27.090438    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:34:27.090449    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:34:27.129207    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:34:27.129219    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
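
By this point the log has settled into a fixed rhythm: a five-second healthz probe, a full diagnostic pass on failure, then another probe. A schematic of that outer loop under assumed names and timings (gather_diagnostics and the six-minute budget are illustrative stand-ins, not minikube's actual values):

    #!/bin/bash
    # Illustrative outer loop: probe, gather on failure, repeat to deadline.
    gather_diagnostics() {
      # Stand-in for the docker ps / docker logs / journalctl / kubectl
      # passes shown throughout this log.
      sudo docker ps -a
    }
    deadline=$(( $(date +%s) + 360 ))    # assumed ~6-minute budget
    while (( $(date +%s) < deadline )); do
      if curl -sk --max-time 5 https://10.0.2.15:8443/healthz | grep -qx ok; then
        echo "apiserver healthy"
        exit 0
      fi
      gather_diagnostics
    done
    echo "apiserver never became healthy" >&2
    exit 1
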
	I0624 03:34:29.667300    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:34:34.669952    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:34:34.670289    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:34:34.703768    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:34:34.703926    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:34:34.723067    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:34:34.723182    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:34:34.738183    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:34:34.738277    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:34:34.749917    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:34:34.750001    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:34:34.760284    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:34:34.760367    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:34:34.770797    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:34:34.770868    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:34:34.781402    6932 logs.go:276] 0 containers: []
	W0624 03:34:34.781413    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:34:34.781489    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:34:34.796071    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:34:34.796093    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:34:34.796100    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:34:34.812369    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:34:34.812384    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:34:34.828362    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:34:34.828376    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:34:34.841150    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:34:34.841162    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:34:34.855716    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:34:34.855727    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:34:34.873132    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:34:34.873143    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:34:34.884475    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:34:34.884486    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:34:34.926220    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:34:34.926231    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:34:34.940037    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:34:34.940049    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:34:34.951303    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:34:34.951314    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:34:34.977367    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:34:34.977378    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:34:35.016430    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:34:35.016441    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:34:35.021324    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:34:35.021332    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:34:35.037571    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:34:35.037583    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:34:35.049673    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:34:35.049687    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:34:35.063622    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:34:35.063632    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:34:35.077457    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:34:35.077469    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:34:37.591588    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:34:42.594306    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:34:42.594418    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:34:42.606229    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:34:42.606321    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:34:42.617302    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:34:42.617370    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:34:42.628182    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:34:42.628249    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:34:42.645040    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:34:42.645108    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:34:42.656210    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:34:42.656273    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:34:42.666498    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:34:42.666563    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:34:42.676756    6932 logs.go:276] 0 containers: []
	W0624 03:34:42.676771    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:34:42.676830    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:34:42.687720    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:34:42.687740    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:34:42.687745    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:34:42.704798    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:34:42.704809    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:34:42.721069    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:34:42.721080    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:34:42.734565    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:34:42.734575    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:34:42.749647    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:34:42.749658    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:34:42.761173    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:34:42.761182    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:34:42.773211    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:34:42.773222    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:34:42.798035    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:34:42.798044    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:34:42.810200    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:34:42.810210    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:34:42.824077    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:34:42.824087    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:34:42.837693    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:34:42.837703    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:34:42.849357    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:34:42.849366    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:34:42.861676    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:34:42.861687    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:34:42.875067    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:34:42.875079    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:34:42.888221    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:34:42.888231    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:34:42.926810    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:34:42.926820    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:34:42.930971    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:34:42.930978    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:34:45.466977    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:34:50.469556    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:34:50.469806    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:34:50.487767    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:34:50.487846    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:34:50.501336    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:34:50.501415    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:34:50.512962    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:34:50.513029    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:34:50.523739    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:34:50.523808    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:34:50.534399    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:34:50.534460    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:34:50.545073    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:34:50.545140    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:34:50.555146    6932 logs.go:276] 0 containers: []
	W0624 03:34:50.555160    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:34:50.555214    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:34:50.569720    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:34:50.569740    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:34:50.569746    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:34:50.581845    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:34:50.581858    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:34:50.586263    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:34:50.586270    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:34:50.599112    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:34:50.599125    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:34:50.613203    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:34:50.613213    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:34:50.624749    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:34:50.624760    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:34:50.636116    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:34:50.636128    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:34:50.671953    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:34:50.671965    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:34:50.687933    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:34:50.687947    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:34:50.699891    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:34:50.699901    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:34:50.716880    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:34:50.716894    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:34:50.741802    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:34:50.741809    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:34:50.756197    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:34:50.756210    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:34:50.768003    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:34:50.768016    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:34:50.807060    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:34:50.807069    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:34:50.820515    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:34:50.820525    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:34:50.833548    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:34:50.833559    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:34:53.347777    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:34:58.350404    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:34:58.350841    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:34:58.387836    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:34:58.387977    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:34:58.409477    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:34:58.409597    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:34:58.424362    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:34:58.424437    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:34:58.442268    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:34:58.442332    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:34:58.453493    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:34:58.453570    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:34:58.464410    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:34:58.464471    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:34:58.475181    6932 logs.go:276] 0 containers: []
	W0624 03:34:58.475193    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:34:58.475255    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:34:58.487218    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:34:58.487234    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:34:58.487239    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:34:58.499534    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:34:58.499545    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:34:58.523403    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:34:58.523414    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:34:58.560022    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:34:58.560029    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:34:58.572225    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:34:58.572235    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:34:58.584309    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:34:58.584319    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:34:58.602761    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:34:58.602771    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:34:58.614875    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:34:58.614886    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:34:58.630347    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:34:58.630357    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:34:58.647407    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:34:58.647417    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:34:58.690111    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:34:58.690122    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:34:58.706894    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:34:58.706904    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:34:58.721261    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:34:58.721272    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:34:58.734905    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:34:58.734916    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:34:58.749165    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:34:58.749176    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:34:58.753950    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:34:58.753956    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:34:58.765886    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:34:58.765898    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
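
Before each gathering pass, minikube enumerates the candidate containers for every control-plane component with a docker name filter; that is where the "N containers: [d9f26ec806e4 5e68f03fc08d]" lines come from (docker ps -a includes exited containers, which is why most components list two IDs). A hypothetical Go helper reproducing that enumeration; the docker flags are copied verbatim from the Run: lines, while the helper name and output format are illustrative:

    // containerids.go: hedged sketch of the "N containers: [...]" step.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all containers (running or exited) whose name
    // matches k8s_<component>, one ID per output line.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := containerIDs("kube-apiserver")
        if err != nil {
            fmt.Println("docker ps failed:", err)
            return
        }
        // Mirrors the logs.go:276 line format seen in the report.
        fmt.Printf("%d containers: %v\n", len(ids), ids)
    }
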
	I0624 03:35:01.279262    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:35:06.279983    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:35:06.280133    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:35:06.293246    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:35:06.293325    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:35:06.304881    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:35:06.304950    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:35:06.315664    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:35:06.315738    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:35:06.326090    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:35:06.326163    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:35:06.336568    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:35:06.336633    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:35:06.347082    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:35:06.347149    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:35:06.359156    6932 logs.go:276] 0 containers: []
	W0624 03:35:06.359173    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:35:06.359230    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:35:06.370579    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:35:06.370596    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:35:06.370601    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:35:06.382515    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:35:06.382526    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:35:06.396154    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:35:06.396164    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:35:06.413154    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:35:06.413164    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:35:06.424698    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:35:06.424708    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:35:06.435473    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:35:06.435485    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:35:06.461204    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:35:06.461211    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:35:06.495673    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:35:06.495684    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:35:06.509177    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:35:06.509187    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:35:06.522036    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:35:06.522046    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:35:06.535801    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:35:06.535811    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:35:06.549190    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:35:06.549200    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:35:06.588180    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:35:06.588188    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:35:06.592753    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:35:06.592759    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:35:06.608457    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:35:06.608467    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:35:06.624043    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:35:06.624054    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:35:06.635892    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:35:06.635908    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:35:09.150150    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:35:14.152324    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:35:14.152429    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:35:14.163791    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:35:14.163862    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:35:14.190971    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:35:14.191046    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:35:14.202088    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:35:14.202158    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:35:14.212320    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:35:14.212394    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:35:14.222886    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:35:14.222953    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:35:14.233506    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:35:14.233578    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:35:14.243796    6932 logs.go:276] 0 containers: []
	W0624 03:35:14.243806    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:35:14.243865    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:35:14.254777    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:35:14.254795    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:35:14.254800    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:35:14.293779    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:35:14.293789    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:35:14.308457    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:35:14.308466    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:35:14.324532    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:35:14.324544    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:35:14.336894    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:35:14.336905    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:35:14.348460    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:35:14.348472    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:35:14.363321    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:35:14.363331    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:35:14.377015    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:35:14.377028    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:35:14.387895    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:35:14.387906    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:35:14.402012    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:35:14.402022    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:35:14.416948    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:35:14.416957    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:35:14.428036    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:35:14.428046    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:35:14.451993    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:35:14.452002    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:35:14.463533    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:35:14.463547    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:35:14.467761    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:35:14.467767    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:35:14.502452    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:35:14.502466    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:35:14.514885    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:35:14.514902    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
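
The "container status" step runs sudo `which crictl || echo crictl` ps -a || sudo docker ps -a. The inner backticks keep the command line well-formed even when crictl is not installed: which crictl fails, echo crictl substitutes a bare name, the resulting sudo crictl ps -a fails to exec, and that failure falls through to plain sudo docker ps -a. The same prefer-crictl-else-docker idea as a hypothetical Go helper:

    // containerstatus.go: hedged sketch of the crictl-or-docker fallback.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func containerStatus() (string, error) {
        // Prefer crictl when it exists on PATH, as the shell one-liner does.
        if _, err := exec.LookPath("crictl"); err == nil {
            out, err := exec.Command("sudo", "crictl", "ps", "-a").Output()
            if err == nil {
                return string(out), nil
            }
        }
        // Fall back to docker, matching the `|| sudo docker ps -a` branch.
        out, err := exec.Command("sudo", "docker", "ps", "-a").Output()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("both crictl and docker failed:", err)
            return
        }
        fmt.Print(out)
    }
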
	I0624 03:35:17.034480    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:35:22.036907    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:35:22.037186    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:35:22.069278    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:35:22.069409    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:35:22.087686    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:35:22.087773    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:35:22.102437    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:35:22.102513    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:35:22.114787    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:35:22.114859    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:35:22.125123    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:35:22.125194    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:35:22.135912    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:35:22.135973    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:35:22.146453    6932 logs.go:276] 0 containers: []
	W0624 03:35:22.146466    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:35:22.146524    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:35:22.158964    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:35:22.158985    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:35:22.158991    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:35:22.196809    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:35:22.196831    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:35:22.211090    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:35:22.211102    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:35:22.225230    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:35:22.225242    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:35:22.240914    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:35:22.240924    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:35:22.254353    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:35:22.254363    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:35:22.265792    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:35:22.265805    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:35:22.283937    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:35:22.283947    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:35:22.295246    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:35:22.295258    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:35:22.309160    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:35:22.309171    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:35:22.320379    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:35:22.320390    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:35:22.332361    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:35:22.332372    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:35:22.356921    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:35:22.356934    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:35:22.369627    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:35:22.369638    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:35:22.373963    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:35:22.373970    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:35:22.407391    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:35:22.407401    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:35:22.424869    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:35:22.424882    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:35:24.939051    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:35:29.941444    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:35:29.941885    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:35:29.972764    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:35:29.972894    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:35:29.992225    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:35:29.992318    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:35:30.006461    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:35:30.006537    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:35:30.023077    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:35:30.023143    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:35:30.033574    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:35:30.033649    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:35:30.044080    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:35:30.044149    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:35:30.054517    6932 logs.go:276] 0 containers: []
	W0624 03:35:30.054528    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:35:30.054585    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:35:30.065023    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:35:30.065043    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:35:30.065048    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:35:30.079227    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:35:30.079239    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:35:30.090701    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:35:30.090712    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:35:30.102304    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:35:30.102314    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:35:30.125311    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:35:30.125320    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:35:30.162223    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:35:30.162233    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:35:30.198647    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:35:30.198658    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:35:30.212455    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:35:30.212465    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:35:30.228027    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:35:30.228040    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:35:30.244969    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:35:30.244980    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:35:30.259109    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:35:30.259119    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:35:30.271966    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:35:30.271977    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:35:30.283522    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:35:30.283536    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:35:30.295031    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:35:30.295044    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:35:30.312749    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:35:30.312765    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:35:30.317170    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:35:30.317175    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:35:30.330759    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:35:30.330772    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
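
Each gathering pass fans out over the same fixed set of sources: journalctl for the kubelet and docker/cri-docker units, a severity-filtered dmesg, kubectl describe nodes run with the in-VM kubeconfig, and docker logs --tail 400 for every container ID found above. A sketch of that fan-out under the assumption that each source is just a named shell command executed over SSH inside the VM; the command strings are taken verbatim from the Run: lines, but the loop itself is illustrative, not minikube's logs.Output implementation:

    // gather.go: hedged sketch of the per-cycle log-gathering fan-out.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        sources := map[string]string{
            "kubelet":        `sudo journalctl -u kubelet -n 400`,
            "Docker":         `sudo journalctl -u docker -u cri-docker -n 400`,
            "dmesg":          `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
            "describe nodes": `sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`,
            // Per-container sources use: docker logs --tail 400 <container-id>
        }
        for name, cmd := range sources {
            fmt.Printf("Gathering logs for %s ...\n", name)
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("%s failed: %v\n", name, err)
                continue
            }
            _ = out // a real gatherer would attach this output to the report
        }
    }
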
	I0624 03:35:32.846757    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:35:37.847101    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:35:37.847247    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:35:37.868541    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:35:37.868623    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:35:37.885324    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:35:37.885386    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:35:37.897439    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:35:37.897511    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:35:37.907978    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:35:37.908047    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:35:37.918742    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:35:37.918804    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:35:37.929823    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:35:37.929896    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:35:37.940202    6932 logs.go:276] 0 containers: []
	W0624 03:35:37.940216    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:35:37.940271    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:35:37.950462    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:35:37.950478    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:35:37.950484    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:35:37.961981    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:35:37.961992    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:35:37.974308    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:35:37.974319    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:35:37.985516    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:35:37.985528    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:35:38.022336    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:35:38.022347    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:35:38.026445    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:35:38.026452    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:35:38.040975    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:35:38.040984    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:35:38.055163    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:35:38.055175    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:35:38.068460    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:35:38.068471    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:35:38.079990    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:35:38.080000    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:35:38.118329    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:35:38.118339    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:35:38.129492    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:35:38.129502    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:35:38.146625    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:35:38.146633    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:35:38.160031    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:35:38.160040    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:35:38.174083    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:35:38.174095    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:35:38.189525    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:35:38.189534    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:35:38.212369    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:35:38.212378    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:35:40.726473    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:35:45.729167    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:35:45.729616    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:35:45.769239    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:35:45.769373    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:35:45.791124    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:35:45.791229    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:35:45.805920    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:35:45.805987    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:35:45.818995    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:35:45.819076    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:35:45.830110    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:35:45.830187    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:35:45.840934    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:35:45.841005    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:35:45.851495    6932 logs.go:276] 0 containers: []
	W0624 03:35:45.851506    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:35:45.851563    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:35:45.862066    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:35:45.862087    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:35:45.862092    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:35:45.877210    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:35:45.877221    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:35:45.888607    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:35:45.888616    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:35:45.911946    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:35:45.911955    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:35:45.923838    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:35:45.923848    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:35:45.945092    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:35:45.945103    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:35:45.958407    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:35:45.958417    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:35:45.972657    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:35:45.972668    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:35:45.986736    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:35:45.986747    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:35:45.998449    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:35:45.998460    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:35:46.014369    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:35:46.014381    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:35:46.051671    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:35:46.051679    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:35:46.086823    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:35:46.086835    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:35:46.101061    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:35:46.101072    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:35:46.113070    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:35:46.113083    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:35:46.117338    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:35:46.117345    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:35:46.136309    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:35:46.136319    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:35:48.652946    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:35:53.655339    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:35:53.655709    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:35:53.687156    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:35:53.687292    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:35:53.705494    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:35:53.705598    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:35:53.719306    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:35:53.719382    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:35:53.731207    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:35:53.731277    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:35:53.743278    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:35:53.743351    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:35:53.753902    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:35:53.753969    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:35:53.764420    6932 logs.go:276] 0 containers: []
	W0624 03:35:53.764430    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:35:53.764483    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:35:53.787939    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:35:53.787958    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:35:53.787963    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:35:53.800436    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:35:53.800450    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:35:53.812017    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:35:53.812028    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:35:53.823773    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:35:53.823784    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:35:53.847359    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:35:53.847369    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:35:53.851593    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:35:53.851599    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:35:53.888406    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:35:53.888420    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:35:53.907876    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:35:53.907887    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:35:53.920152    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:35:53.920164    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:35:53.937845    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:35:53.937860    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:35:53.949243    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:35:53.949257    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:35:53.963185    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:35:53.963198    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:35:53.977539    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:35:53.977554    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:35:53.995776    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:35:53.995785    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:35:54.008056    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:35:54.008071    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:35:54.022610    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:35:54.022624    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:35:54.061913    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:35:54.061923    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:35:56.578480    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:36:01.580122    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:36:01.580286    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:36:01.598374    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:36:01.598465    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:36:01.611859    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:36:01.611930    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:36:01.623153    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:36:01.623224    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:36:01.633295    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:36:01.633366    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:36:01.647042    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:36:01.647106    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:36:01.657967    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:36:01.658037    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:36:01.668036    6932 logs.go:276] 0 containers: []
	W0624 03:36:01.668051    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:36:01.668108    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:36:01.678968    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:36:01.678987    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:36:01.678992    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:36:01.696086    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:36:01.696097    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:36:01.707412    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:36:01.707423    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:36:01.731509    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:36:01.731517    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:36:01.743258    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:36:01.743269    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:36:01.748130    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:36:01.748139    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:36:01.760027    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:36:01.760040    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:36:01.773416    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:36:01.773427    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:36:01.785695    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:36:01.785706    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:36:01.801812    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:36:01.801823    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:36:01.813019    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:36:01.813029    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:36:01.847716    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:36:01.847728    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:36:01.868528    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:36:01.868540    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:36:01.882093    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:36:01.882103    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:36:01.896140    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:36:01.896151    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:36:01.908414    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:36:01.908424    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:36:01.946719    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:36:01.946727    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:36:04.460836    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:36:09.462816    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:36:09.462963    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:36:09.475375    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:36:09.475457    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:36:09.486375    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:36:09.486449    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:36:09.496543    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:36:09.496616    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:36:09.506950    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:36:09.507014    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:36:09.517556    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:36:09.517626    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:36:09.528005    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:36:09.528072    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:36:09.541191    6932 logs.go:276] 0 containers: []
	W0624 03:36:09.541204    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:36:09.541262    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:36:09.552166    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:36:09.552185    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:36:09.552190    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:36:09.589932    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:36:09.589941    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:36:09.626233    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:36:09.626244    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:36:09.639114    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:36:09.639124    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:36:09.653538    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:36:09.653552    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:36:09.671907    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:36:09.671917    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:36:09.676848    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:36:09.676854    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:36:09.691077    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:36:09.691091    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:36:09.708599    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:36:09.708610    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:36:09.720495    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:36:09.720508    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:36:09.733698    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:36:09.733711    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:36:09.749129    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:36:09.749143    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:36:09.761206    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:36:09.761216    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:36:09.773245    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:36:09.773254    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:36:09.784465    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:36:09.784473    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:36:09.808665    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:36:09.808672    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:36:09.819943    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:36:09.819957    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:36:12.334071    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:36:17.336618    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:36:17.336823    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:36:17.356312    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:36:17.356402    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:36:17.370312    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:36:17.370389    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:36:17.381747    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:36:17.381814    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:36:17.392492    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:36:17.392560    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:36:17.403539    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:36:17.403604    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:36:17.413655    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:36:17.413719    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:36:17.423727    6932 logs.go:276] 0 containers: []
	W0624 03:36:17.423740    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:36:17.423797    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:36:17.433896    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:36:17.433913    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:36:17.433918    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:36:17.448585    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:36:17.448595    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:36:17.462206    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:36:17.462219    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:36:17.477238    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:36:17.477250    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:36:17.499853    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:36:17.499863    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:36:17.515221    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:36:17.515232    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:36:17.526143    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:36:17.526157    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:36:17.541604    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:36:17.541619    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:36:17.545943    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:36:17.545950    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:36:17.594276    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:36:17.594288    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:36:17.608529    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:36:17.608539    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:36:17.621776    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:36:17.621787    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:36:17.639640    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:36:17.639650    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:36:17.651561    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:36:17.651571    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:36:17.689830    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:36:17.689855    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:36:17.703060    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:36:17.703071    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:36:17.715089    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:36:17.715099    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
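[Editor's note] Each diagnostic round above has the same two-step shape: enumerate a component's containers with a docker name filter, then tail the last 400 lines of each. A condensed local sketch of that loop (helper name is hypothetical; minikube runs the same commands over SSH via ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all containers, running or exited, whose name matches
    // the k8s_<component> prefix kubelet gives pod containers.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
            ids, err := containerIDs(c)
            if err != nil {
                continue
            }
            for _, id := range ids {
                // Same bound the log uses: only the last 400 lines per container.
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
            }
        }
    }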
	I0624 03:36:20.230391    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:36:25.232782    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:36:25.233224    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:36:25.275174    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:36:25.275306    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:36:25.293151    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:36:25.293241    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:36:25.306985    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:36:25.307060    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:36:25.318931    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:36:25.318998    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:36:25.329544    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:36:25.329605    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:36:25.340091    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:36:25.340162    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:36:25.349831    6932 logs.go:276] 0 containers: []
	W0624 03:36:25.349844    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:36:25.349902    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:36:25.359906    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:36:25.359926    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:36:25.359931    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:36:25.372103    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:36:25.372114    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:36:25.409426    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:36:25.409435    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:36:25.427015    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:36:25.427024    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:36:25.438056    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:36:25.438067    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:36:25.452382    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:36:25.452391    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:36:25.468253    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:36:25.468265    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:36:25.503001    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:36:25.503010    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:36:25.516527    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:36:25.516539    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:36:25.528290    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:36:25.528301    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:36:25.542851    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:36:25.542862    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:36:25.558341    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:36:25.558354    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:36:25.575876    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:36:25.575887    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:36:25.587218    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:36:25.587229    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:36:25.611121    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:36:25.611129    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:36:25.625986    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:36:25.625997    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:36:25.630369    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:36:25.630376    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:36:28.145235    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:36:33.147476    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:36:33.147795    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:36:33.183751    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:36:33.183891    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:36:33.209052    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:36:33.209146    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:36:33.237810    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:36:33.237879    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:36:33.251948    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:36:33.252015    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:36:33.262721    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:36:33.262788    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:36:33.273504    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:36:33.273569    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:36:33.283882    6932 logs.go:276] 0 containers: []
	W0624 03:36:33.283896    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:36:33.283947    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:36:33.295142    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:36:33.295162    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:36:33.295168    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:36:33.299986    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:36:33.299994    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:36:33.311543    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:36:33.311557    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:36:33.335402    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:36:33.335412    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:36:33.347705    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:36:33.347717    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:36:33.361845    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:36:33.361855    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:36:33.374699    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:36:33.374711    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:36:33.414789    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:36:33.414801    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:36:33.428124    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:36:33.428138    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:36:33.443383    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:36:33.443398    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:36:33.457227    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:36:33.457241    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:36:33.497470    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:36:33.497484    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:36:33.511639    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:36:33.511652    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:36:33.523892    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:36:33.523905    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:36:33.539199    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:36:33.539212    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:36:33.555834    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:36:33.555847    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:36:33.569131    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:36:33.569143    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:36:36.091071    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:36:41.093296    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:36:41.093375    6932 kubeadm.go:591] duration metric: took 4m5.204984917s to restartPrimaryControlPlane
	W0624 03:36:41.093422    6932 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0624 03:36:41.093442    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0624 03:36:42.106857    6932 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.013412791s)
	I0624 03:36:42.106930    6932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0624 03:36:42.111793    6932 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0624 03:36:42.114594    6932 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0624 03:36:42.117113    6932 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0624 03:36:42.117120    6932 kubeadm.go:156] found existing configuration files:
	
	I0624 03:36:42.117142    6932 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51210 /etc/kubernetes/admin.conf
	I0624 03:36:42.119609    6932 kubeadm.go:162] "https://control-plane.minikube.internal:51210" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51210 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0624 03:36:42.119632    6932 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0624 03:36:42.122233    6932 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51210 /etc/kubernetes/kubelet.conf
	I0624 03:36:42.124497    6932 kubeadm.go:162] "https://control-plane.minikube.internal:51210" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51210 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0624 03:36:42.124515    6932 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0624 03:36:42.127464    6932 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51210 /etc/kubernetes/controller-manager.conf
	I0624 03:36:42.130941    6932 kubeadm.go:162] "https://control-plane.minikube.internal:51210" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51210 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0624 03:36:42.130983    6932 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0624 03:36:42.133632    6932 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51210 /etc/kubernetes/scheduler.conf
	I0624 03:36:42.136115    6932 kubeadm.go:162] "https://control-plane.minikube.internal:51210" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51210 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0624 03:36:42.136139    6932 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
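[Editor's note] The grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes must mention the expected control-plane endpoint, and any file that does not (or, as here, does not exist, so grep exits with status 2) is removed so kubeadm init can regenerate it. A local sketch of the same check — the endpoint string is taken from the log; running it over SSH is elided:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    const endpoint = "https://control-plane.minikube.internal:51210"

    func main() {
        for _, conf := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(conf)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing or pointing elsewhere: remove so kubeadm rewrites it.
                fmt.Println("removing stale", conf)
                os.Remove(conf)
            }
        }
    }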
	I0624 03:36:42.139100    6932 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0624 03:36:42.155936    6932 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0624 03:36:42.155983    6932 kubeadm.go:309] [preflight] Running pre-flight checks
	I0624 03:36:42.216471    6932 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0624 03:36:42.216529    6932 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0624 03:36:42.216573    6932 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0624 03:36:42.265566    6932 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0624 03:36:42.273805    6932 out.go:204]   - Generating certificates and keys ...
	I0624 03:36:42.273841    6932 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0624 03:36:42.273874    6932 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0624 03:36:42.273927    6932 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0624 03:36:42.273956    6932 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0624 03:36:42.273990    6932 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0624 03:36:42.274013    6932 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0624 03:36:42.274046    6932 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0624 03:36:42.274082    6932 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0624 03:36:42.274119    6932 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0624 03:36:42.274152    6932 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0624 03:36:42.274180    6932 kubeadm.go:309] [certs] Using the existing "sa" key
	I0624 03:36:42.274210    6932 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0624 03:36:42.307482    6932 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0624 03:36:42.348811    6932 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0624 03:36:42.444238    6932 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0624 03:36:42.513080    6932 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0624 03:36:42.543464    6932 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0624 03:36:42.543940    6932 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0624 03:36:42.544063    6932 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0624 03:36:42.631858    6932 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0624 03:36:42.640012    6932 out.go:204]   - Booting up control plane ...
	I0624 03:36:42.640067    6932 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0624 03:36:42.640111    6932 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0624 03:36:42.640148    6932 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0624 03:36:42.640190    6932 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0624 03:36:42.640267    6932 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0624 03:36:47.138144    6932 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501909 seconds
	I0624 03:36:47.138343    6932 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0624 03:36:47.141692    6932 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0624 03:36:47.650952    6932 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0624 03:36:47.651047    6932 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-398000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0624 03:36:48.155137    6932 kubeadm.go:309] [bootstrap-token] Using token: abt9zh.ri93u3l2pr9sv07s
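[Editor's note] The bootstrap token above follows kubeadm's fixed <token-id>.<token-secret> format: a 6-character ID (public, it names the backing Secret) joined to a 16-character secret. A quick validation sketch:

    package main

    import (
        "fmt"
        "regexp"
    )

    // kubeadm's documented bootstrap token format: [a-z0-9]{6}.[a-z0-9]{16}
    var tokenRe = regexp.MustCompile(`^[a-z0-9]{6}\.[a-z0-9]{16}$`)

    func main() {
        fmt.Println(tokenRe.MatchString("abt9zh.ri93u3l2pr9sv07s")) // true
    }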
	I0624 03:36:48.161393    6932 out.go:204]   - Configuring RBAC rules ...
	I0624 03:36:48.161452    6932 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0624 03:36:48.161491    6932 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0624 03:36:48.166377    6932 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0624 03:36:48.167148    6932 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0624 03:36:48.167969    6932 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0624 03:36:48.168811    6932 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0624 03:36:48.172980    6932 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0624 03:36:48.349732    6932 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0624 03:36:48.559338    6932 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0624 03:36:48.559903    6932 kubeadm.go:309] 
	I0624 03:36:48.559938    6932 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0624 03:36:48.559941    6932 kubeadm.go:309] 
	I0624 03:36:48.560033    6932 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0624 03:36:48.560038    6932 kubeadm.go:309] 
	I0624 03:36:48.560051    6932 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0624 03:36:48.560084    6932 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0624 03:36:48.560117    6932 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0624 03:36:48.560120    6932 kubeadm.go:309] 
	I0624 03:36:48.560145    6932 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0624 03:36:48.560150    6932 kubeadm.go:309] 
	I0624 03:36:48.560173    6932 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0624 03:36:48.560175    6932 kubeadm.go:309] 
	I0624 03:36:48.560199    6932 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0624 03:36:48.560275    6932 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0624 03:36:48.560339    6932 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0624 03:36:48.560345    6932 kubeadm.go:309] 
	I0624 03:36:48.560387    6932 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0624 03:36:48.560515    6932 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0624 03:36:48.560520    6932 kubeadm.go:309] 
	I0624 03:36:48.560614    6932 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token abt9zh.ri93u3l2pr9sv07s \
	I0624 03:36:48.560681    6932 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:589c6e059dc550e81a31ec2e6905c9b09b436fd51ac2d4f41e53b0889a4a68c0 \
	I0624 03:36:48.560695    6932 kubeadm.go:309] 	--control-plane 
	I0624 03:36:48.560697    6932 kubeadm.go:309] 
	I0624 03:36:48.560738    6932 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0624 03:36:48.560741    6932 kubeadm.go:309] 
	I0624 03:36:48.560796    6932 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token abt9zh.ri93u3l2pr9sv07s \
	I0624 03:36:48.560934    6932 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:589c6e059dc550e81a31ec2e6905c9b09b436fd51ac2d4f41e53b0889a4a68c0 
	I0624 03:36:48.561022    6932 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
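[Editor's note] The --discovery-token-ca-cert-hash printed in the join commands above is a SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info, which joining nodes use to pin the CA. A sketch that reproduces the computation from the ca.crt in the certificateDir the log names:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Hash the raw SubjectPublicKeyInfo — exactly what kubeadm pins.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }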
	I0624 03:36:48.561031    6932 cni.go:84] Creating CNI manager for ""
	I0624 03:36:48.561043    6932 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:36:48.565257    6932 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0624 03:36:48.573248    6932 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0624 03:36:48.577640    6932 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
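[Editor's note] The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is a bridge CNI configuration. The exact contents are not shown in the log; the sketch below writes a conflist of the typical shape, and every field value in it is an assumption, not the bytes minikube actually ships:

    package main

    import "os"

    // Illustrative bridge conflist; values are assumptions, not minikube's file.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }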
	I0624 03:36:48.584525    6932 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0624 03:36:48.584611    6932 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 03:36:48.584612    6932 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-398000 minikube.k8s.io/updated_at=2024_06_24T03_36_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec minikube.k8s.io/name=running-upgrade-398000 minikube.k8s.io/primary=true
	I0624 03:36:48.629667    6932 kubeadm.go:1107] duration metric: took 45.095917ms to wait for elevateKubeSystemPrivileges
	I0624 03:36:48.629674    6932 ops.go:34] apiserver oom_adj: -16
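[Editor's note] The oom_adj probe a few lines up reads /proc/<pid>/oom_adj for the kube-apiserver process; the -16 just logged biases the kernel's OOM killer away from the apiserver. A sketch of the same read, given a PID (finding the PID via pgrep, as the log does, is elided):

    package main

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
    )

    // oomAdj returns the legacy OOM-killer bias of a process; negative values
    // make the kernel less likely to pick it as an OOM victim.
    func oomAdj(pid int) (int, error) {
        data, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
        if err != nil {
            return 0, err
        }
        return strconv.Atoi(strings.TrimSpace(string(data)))
    }

    func main() {
        adj, err := oomAdj(os.Getpid()) // substitute the apiserver's PID here
        if err != nil {
            panic(err)
        }
        fmt.Println("oom_adj:", adj)
    }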
	W0624 03:36:48.629698    6932 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0624 03:36:48.629702    6932 kubeadm.go:393] duration metric: took 4m12.755816875s to StartCluster
	I0624 03:36:48.629711    6932 settings.go:142] acquiring lock: {Name:mk350ce6fa96c4a87ff2b5575a8be101ddfe67cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:36:48.629807    6932 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:36:48.630225    6932 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/kubeconfig: {Name:mkbbeb070f5681b596c7409cd66efdb520d422d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:36:48.630434    6932 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:36:48.630491    6932 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0624 03:36:48.630542    6932 config.go:182] Loaded profile config "running-upgrade-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0624 03:36:48.630552    6932 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-398000"
	I0624 03:36:48.630565    6932 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-398000"
	I0624 03:36:48.630572    6932 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-398000"
	I0624 03:36:48.630586    6932 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-398000"
	W0624 03:36:48.630576    6932 addons.go:243] addon storage-provisioner should already be in state true
	I0624 03:36:48.630624    6932 host.go:66] Checking if "running-upgrade-398000" exists ...
	I0624 03:36:48.631635    6932 kapi.go:59] client config for running-upgrade-398000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/client.key", CAFile:"/Users/jenkins/minikube-integration/19124-4612/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10655ed80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0624 03:36:48.631761    6932 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-398000"
	W0624 03:36:48.631765    6932 addons.go:243] addon default-storageclass should already be in state true
	I0624 03:36:48.631772    6932 host.go:66] Checking if "running-upgrade-398000" exists ...
	I0624 03:36:48.633278    6932 out.go:177] * Verifying Kubernetes components...
	I0624 03:36:48.633710    6932 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0624 03:36:48.637363    6932 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0624 03:36:48.637369    6932 sshutil.go:53] new ssh client: &{IP:localhost Port:51144 SSHKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/running-upgrade-398000/id_rsa Username:docker}
	I0624 03:36:48.640213    6932 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0624 03:36:48.644191    6932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:36:48.648284    6932 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0624 03:36:48.648291    6932 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0624 03:36:48.648297    6932 sshutil.go:53] new ssh client: &{IP:localhost Port:51144 SSHKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/running-upgrade-398000/id_rsa Username:docker}
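[Editor's note] "scp memory --> <path>" above means the addon manifest is streamed from an in-memory byte slice rather than copied from a local file. A sketch of the same idea with golang.org/x/crypto/ssh, piping stdin into sudo tee on the guest; the key path and port come from the log, while the payload and remote filename are hypothetical:

    package main

    import (
        "bytes"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // copyFromMemory writes data to remotePath by piping it into `sudo tee`.
    func copyFromMemory(client *ssh.Client, data []byte, remotePath string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        return sess.Run("sudo tee " + remotePath + " >/dev/null")
    }

    func main() {
        key, err := os.ReadFile("/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/running-upgrade-398000/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        client, err := ssh.Dial("tcp", "localhost:51144", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local VM
        })
        if err != nil {
            panic(err)
        }
        defer client.Close()
        if err := copyFromMemory(client, []byte("apiVersion: v1\n"), "/etc/kubernetes/addons/example.yaml"); err != nil {
            panic(err)
        }
    }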
	I0624 03:36:48.726790    6932 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0624 03:36:48.731821    6932 api_server.go:52] waiting for apiserver process to appear ...
	I0624 03:36:48.731866    6932 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 03:36:48.735983    6932 api_server.go:72] duration metric: took 105.53875ms to wait for apiserver process to appear ...
	I0624 03:36:48.735991    6932 api_server.go:88] waiting for apiserver healthz status ...
	I0624 03:36:48.735997    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:36:48.773546    6932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0624 03:36:48.784476    6932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0624 03:36:53.738091    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:36:53.738136    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:36:58.738470    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:36:58.738513    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:03.738878    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:03.738912    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:08.739344    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:08.739376    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:13.739954    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:13.739974    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:18.740716    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:18.740776    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0624 03:37:19.115134    6932 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0624 03:37:19.119711    6932 out.go:177] * Enabled addons: storage-provisioner
	I0624 03:37:19.131697    6932 addons.go:510] duration metric: took 30.501477125s for enable addons: enabled=[storage-provisioner]
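[Editor's note] The default-storageclass failure reported above happened in the step that lists StorageClasses and marks "standard" as the default via the storageclass.kubernetes.io/is-default-class annotation; with the apiserver unreachable, the List call timed out. A client-go sketch of the successful path (the kubeconfig path is illustrative):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()
        // This List is the call that failed with "dial tcp ... i/o timeout".
        scs, err := clientset.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for i := range scs.Items {
            sc := &scs.Items[i]
            if sc.Name != "standard" { // the class named in the error message
                continue
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
            if _, err := clientset.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
                panic(err)
            }
            fmt.Println("marked", sc.Name, "as default")
        }
    }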
	I0624 03:37:23.741774    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:23.741800    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:28.743021    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:28.743067    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:33.744707    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:33.744746    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:38.745664    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:38.745686    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:43.747842    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:43.747881    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:48.748639    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:48.748751    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:37:48.764117    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:37:48.764185    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:37:48.774522    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:37:48.774578    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:37:48.785140    6932 logs.go:276] 2 containers: [33271b9f8b21 10af503aede9]
	I0624 03:37:48.785206    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:37:48.795701    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:37:48.795760    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:37:48.806170    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:37:48.806231    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:37:48.820432    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:37:48.820497    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:37:48.830135    6932 logs.go:276] 0 containers: []
	W0624 03:37:48.830145    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:37:48.830192    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:37:48.842499    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:37:48.842515    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:37:48.842521    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:37:48.853996    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:37:48.854008    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:37:48.865030    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:37:48.865040    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:37:48.901362    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:37:48.901380    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:37:48.916314    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:37:48.916324    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:37:48.930533    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:37:48.930541    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:37:48.942512    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:37:48.942521    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:37:48.953914    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:37:48.953925    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:37:48.968934    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:37:48.968944    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:37:48.979926    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:37:48.979937    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:37:48.984461    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:37:48.984468    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:37:49.020253    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:37:49.020265    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:37:49.037569    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:37:49.037580    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:37:51.562823    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:56.565094    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:56.565275    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:37:56.578888    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:37:56.578969    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:37:56.590714    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:37:56.590780    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:37:56.604523    6932 logs.go:276] 2 containers: [33271b9f8b21 10af503aede9]
	I0624 03:37:56.604589    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:37:56.615235    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:37:56.615309    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:37:56.625834    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:37:56.625904    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:37:56.636545    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:37:56.636616    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:37:56.651791    6932 logs.go:276] 0 containers: []
	W0624 03:37:56.651804    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:37:56.651857    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:37:56.662437    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:37:56.662451    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:37:56.662456    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:37:56.679628    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:37:56.679640    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:37:56.715050    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:37:56.715057    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:37:56.719442    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:37:56.719449    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:37:56.733187    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:37:56.733201    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:37:56.745726    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:37:56.745740    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:37:56.760475    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:37:56.760485    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:37:56.783576    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:37:56.783585    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:37:56.796203    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:37:56.796215    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:37:56.834689    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:37:56.834700    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:37:56.848529    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:37:56.848540    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:37:56.860309    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:37:56.860319    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:37:56.872358    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:37:56.872368    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:37:59.385454    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:38:04.387622    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:38:04.387809    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:38:04.405479    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:38:04.405564    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:38:04.420553    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:38:04.420645    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:38:04.433742    6932 logs.go:276] 2 containers: [33271b9f8b21 10af503aede9]
	I0624 03:38:04.433806    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:38:04.448274    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:38:04.448341    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:38:04.459113    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:38:04.459182    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:38:04.469388    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:38:04.469451    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:38:04.479709    6932 logs.go:276] 0 containers: []
	W0624 03:38:04.479722    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:38:04.479778    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:38:04.489940    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:38:04.489957    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:38:04.489963    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:38:04.506931    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:38:04.506942    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:38:04.532217    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:38:04.532225    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:38:04.567338    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:38:04.567344    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:38:04.602468    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:38:04.602477    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:38:04.617201    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:38:04.617213    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:38:04.631167    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:38:04.631177    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:38:04.643187    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:38:04.643199    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:38:04.655643    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:38:04.655655    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:38:04.660443    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:38:04.660454    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:38:04.672197    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:38:04.672206    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:38:04.683783    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:38:04.683793    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:38:04.698564    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:38:04.698574    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:38:07.219516    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:38:12.221790    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:38:12.222017    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:38:12.249441    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:38:12.249529    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:38:12.262756    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:38:12.262836    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:38:12.274337    6932 logs.go:276] 2 containers: [33271b9f8b21 10af503aede9]
	I0624 03:38:12.274402    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:38:12.284752    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:38:12.284811    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:38:12.295223    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:38:12.295284    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:38:12.305961    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:38:12.306031    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:38:12.321087    6932 logs.go:276] 0 containers: []
	W0624 03:38:12.321098    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:38:12.321151    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:38:12.331851    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:38:12.331866    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:38:12.331872    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:38:12.344506    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:38:12.344516    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:38:12.362187    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:38:12.362200    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:38:12.395791    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:38:12.395798    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:38:12.434259    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:38:12.434273    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:38:12.448665    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:38:12.448678    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:38:12.463271    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:38:12.463281    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:38:12.477533    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:38:12.477542    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:38:12.492545    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:38:12.492558    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:38:12.503779    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:38:12.503789    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:38:12.515284    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:38:12.515292    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:38:12.520133    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:38:12.520141    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:38:12.531598    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:38:12.531611    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:38:15.056941    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:38:20.057457    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:38:20.057645    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:38:20.069774    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:38:20.069894    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:38:20.084451    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:38:20.084520    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:38:20.095090    6932 logs.go:276] 2 containers: [33271b9f8b21 10af503aede9]
	I0624 03:38:20.095155    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:38:20.105531    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:38:20.105591    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:38:20.115762    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:38:20.115815    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:38:20.126166    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:38:20.126222    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:38:20.136384    6932 logs.go:276] 0 containers: []
	W0624 03:38:20.136395    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:38:20.136450    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:38:20.146879    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:38:20.146894    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:38:20.146899    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:38:20.181304    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:38:20.181311    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:38:20.185552    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:38:20.185559    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:38:20.199111    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:38:20.199120    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:38:20.212962    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:38:20.212976    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:38:20.224626    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:38:20.224639    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:38:20.249651    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:38:20.249662    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:38:20.261475    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:38:20.261498    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:38:20.298340    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:38:20.298353    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:38:20.309852    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:38:20.309863    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:38:20.321734    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:38:20.321748    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:38:20.337078    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:38:20.337092    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:38:20.355365    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:38:20.355379    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:38:22.867202    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:38:27.869837    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:38:27.870280    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:38:27.903929    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:38:27.904066    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:38:27.924162    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:38:27.924268    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:38:27.943240    6932 logs.go:276] 2 containers: [33271b9f8b21 10af503aede9]
	I0624 03:38:27.943317    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:38:27.955030    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:38:27.955101    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:38:27.966612    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:38:27.966693    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:38:27.982095    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:38:27.982168    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:38:27.993038    6932 logs.go:276] 0 containers: []
	W0624 03:38:27.993050    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:38:27.993107    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:38:28.003854    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:38:28.003870    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:38:28.003876    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:38:28.023222    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:38:28.023233    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:38:28.035650    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:38:28.035660    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:38:28.053470    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:38:28.053479    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:38:28.065667    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:38:28.065676    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:38:28.100340    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:38:28.100348    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:38:28.104660    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:38:28.104665    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:38:28.140989    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:38:28.141001    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:38:28.155090    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:38:28.155100    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:38:28.168965    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:38:28.168980    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:38:28.181314    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:38:28.181324    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:38:28.192877    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:38:28.192892    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:38:28.216815    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:38:28.216822    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:38:30.729893    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:38:35.732055    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:38:35.732197    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:38:35.744206    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:38:35.744285    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:38:35.755014    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:38:35.755081    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:38:35.765430    6932 logs.go:276] 2 containers: [33271b9f8b21 10af503aede9]
	I0624 03:38:35.765491    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:38:35.775610    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:38:35.775676    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:38:35.786294    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:38:35.786360    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:38:35.796590    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:38:35.796653    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:38:35.806966    6932 logs.go:276] 0 containers: []
	W0624 03:38:35.806978    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:38:35.807036    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:38:35.817862    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:38:35.817877    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:38:35.817882    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:38:35.829261    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:38:35.829271    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:38:35.843934    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:38:35.843944    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:38:35.855983    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:38:35.855996    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:38:35.873489    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:38:35.873500    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:38:35.907832    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:38:35.907843    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:38:35.912315    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:38:35.912320    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:38:35.926386    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:38:35.926395    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:38:35.937595    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:38:35.937605    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:38:35.961547    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:38:35.961555    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:38:35.972610    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:38:35.972623    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:38:36.006383    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:38:36.006396    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:38:36.025274    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:38:36.025284    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:38:38.539096    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:38:43.541318    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:38:43.541480    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:38:43.560592    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:38:43.560669    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:38:43.574646    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:38:43.574720    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:38:43.585715    6932 logs.go:276] 2 containers: [33271b9f8b21 10af503aede9]
	I0624 03:38:43.585780    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:38:43.596463    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:38:43.596529    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:38:43.606994    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:38:43.607064    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:38:43.617986    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:38:43.618050    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:38:43.628061    6932 logs.go:276] 0 containers: []
	W0624 03:38:43.628073    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:38:43.628126    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:38:43.639060    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:38:43.639076    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:38:43.639081    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:38:43.654055    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:38:43.654065    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:38:43.666984    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:38:43.666994    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:38:43.681624    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:38:43.681634    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:38:43.693145    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:38:43.693155    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:38:43.704620    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:38:43.704630    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:38:43.739329    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:38:43.739341    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:38:43.774168    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:38:43.774180    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:38:43.788739    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:38:43.788748    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:38:43.811938    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:38:43.811945    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:38:43.823516    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:38:43.823526    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:38:43.828983    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:38:43.828992    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:38:43.841807    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:38:43.841817    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:38:46.362992    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:38:51.365289    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:38:51.365538    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:38:51.384348    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:38:51.384438    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:38:51.398499    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:38:51.398576    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:38:51.410010    6932 logs.go:276] 2 containers: [33271b9f8b21 10af503aede9]
	I0624 03:38:51.410081    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:38:51.420292    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:38:51.420360    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:38:51.432975    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:38:51.433041    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:38:51.443559    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:38:51.443628    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:38:51.453786    6932 logs.go:276] 0 containers: []
	W0624 03:38:51.453799    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:38:51.453859    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:38:51.464093    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:38:51.464106    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:38:51.464111    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:38:51.478202    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:38:51.478214    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:38:51.490003    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:38:51.490014    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:38:51.501860    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:38:51.501869    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:38:51.513792    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:38:51.513802    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:38:51.530981    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:38:51.530992    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:38:51.543061    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:38:51.543071    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:38:51.567727    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:38:51.567736    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:38:51.603606    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:38:51.603617    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:38:51.608210    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:38:51.608221    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:38:51.621529    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:38:51.621540    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:38:51.636742    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:38:51.636751    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:38:51.648426    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:38:51.648434    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:38:54.186043    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:38:59.188263    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:38:59.188457    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:38:59.207691    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:38:59.207781    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:38:59.222006    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:38:59.222068    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:38:59.233545    6932 logs.go:276] 2 containers: [33271b9f8b21 10af503aede9]
	I0624 03:38:59.233599    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:38:59.245634    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:38:59.245702    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:38:59.255859    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:38:59.255918    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:38:59.265980    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:38:59.266046    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:38:59.275825    6932 logs.go:276] 0 containers: []
	W0624 03:38:59.275837    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:38:59.275891    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:38:59.286467    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:38:59.286483    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:38:59.286488    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:38:59.303410    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:38:59.303420    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:38:59.315018    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:38:59.315032    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:38:59.349230    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:38:59.349237    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:38:59.353417    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:38:59.353425    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:38:59.367761    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:38:59.367769    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:38:59.379757    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:38:59.379769    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:38:59.391500    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:38:59.391511    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:38:59.403653    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:38:59.403668    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:38:59.427580    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:38:59.427592    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:38:59.438969    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:38:59.438983    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:38:59.472647    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:38:59.472657    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:38:59.487057    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:38:59.487071    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:39:02.002911    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:39:07.005178    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:39:07.005364    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:39:07.026033    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:39:07.026133    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:39:07.039692    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:39:07.039764    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:39:07.052112    6932 logs.go:276] 4 containers: [a467a2817ca1 3f031f564e84 33271b9f8b21 10af503aede9]
	I0624 03:39:07.052180    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:39:07.062964    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:39:07.063028    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:39:07.074310    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:39:07.074371    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:39:07.085709    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:39:07.085775    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:39:07.095649    6932 logs.go:276] 0 containers: []
	W0624 03:39:07.095663    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:39:07.095711    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:39:07.112565    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:39:07.112584    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:39:07.112589    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:39:07.131848    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:39:07.131859    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:39:07.155628    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:39:07.155637    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:39:07.167076    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:39:07.167089    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:39:07.182524    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:39:07.182533    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:39:07.197270    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:39:07.197289    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:39:07.211729    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:39:07.211739    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:39:07.245174    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:39:07.245182    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:39:07.249203    6932 logs.go:123] Gathering logs for coredns [3f031f564e84] ...
	I0624 03:39:07.249209    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f031f564e84"
	I0624 03:39:07.261243    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:39:07.261254    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:39:07.272714    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:39:07.272723    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:39:07.284555    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:39:07.284569    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:39:07.298395    6932 logs.go:123] Gathering logs for coredns [a467a2817ca1] ...
	I0624 03:39:07.298409    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a467a2817ca1"
	I0624 03:39:07.311427    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:39:07.311437    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:39:07.349539    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:39:07.349548    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:39:09.862641    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:39:14.864926    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:39:14.865211    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:39:14.892864    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:39:14.892965    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:39:14.910511    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:39:14.910596    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:39:14.923753    6932 logs.go:276] 4 containers: [a467a2817ca1 3f031f564e84 33271b9f8b21 10af503aede9]
	I0624 03:39:14.923829    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:39:14.935452    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:39:14.935517    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:39:14.945889    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:39:14.945958    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:39:14.956398    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:39:14.956464    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:39:14.966445    6932 logs.go:276] 0 containers: []
	W0624 03:39:14.966458    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:39:14.966508    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:39:14.976971    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:39:14.976989    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:39:14.976995    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:39:14.981430    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:39:14.981439    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:39:15.015471    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:39:15.015483    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:39:15.027638    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:39:15.027650    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:39:15.063004    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:39:15.063025    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:39:15.075567    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:39:15.075578    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:39:15.101205    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:39:15.101213    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:39:15.115313    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:39:15.115323    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:39:15.133147    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:39:15.133159    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:39:15.147638    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:39:15.147653    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:39:15.161699    6932 logs.go:123] Gathering logs for coredns [a467a2817ca1] ...
	I0624 03:39:15.161709    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a467a2817ca1"
	I0624 03:39:15.172907    6932 logs.go:123] Gathering logs for coredns [3f031f564e84] ...
	I0624 03:39:15.172918    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f031f564e84"
	I0624 03:39:15.184430    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:39:15.184444    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:39:15.196399    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:39:15.196409    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:39:15.211079    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:39:15.211091    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:39:17.724789    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:39:22.727009    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:39:22.727166    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:39:22.741164    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:39:22.741224    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:39:22.751949    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:39:22.752006    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:39:22.762737    6932 logs.go:276] 4 containers: [a467a2817ca1 3f031f564e84 33271b9f8b21 10af503aede9]
	I0624 03:39:22.762809    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:39:22.773546    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:39:22.773616    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:39:22.783976    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:39:22.784030    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:39:22.794363    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:39:22.794421    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:39:22.804521    6932 logs.go:276] 0 containers: []
	W0624 03:39:22.804533    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:39:22.804586    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:39:22.814780    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:39:22.814797    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:39:22.814805    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:39:22.826433    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:39:22.826448    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:39:22.837712    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:39:22.837724    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:39:22.851824    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:39:22.851834    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:39:22.865841    6932 logs.go:123] Gathering logs for coredns [3f031f564e84] ...
	I0624 03:39:22.865855    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f031f564e84"
	I0624 03:39:22.877076    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:39:22.877089    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:39:22.888529    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:39:22.888539    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:39:22.923149    6932 logs.go:123] Gathering logs for coredns [a467a2817ca1] ...
	I0624 03:39:22.923160    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a467a2817ca1"
	I0624 03:39:22.934316    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:39:22.934330    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:39:22.959289    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:39:22.959299    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:39:22.974053    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:39:22.974062    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:39:22.989759    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:39:22.989771    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:39:23.006387    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:39:23.006398    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:39:23.018208    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:39:23.018222    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:39:23.052431    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:39:23.052439    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:39:25.558625    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:39:30.560998    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:39:30.561362    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:39:30.592700    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:39:30.592837    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:39:30.615608    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:39:30.615713    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:39:30.629449    6932 logs.go:276] 4 containers: [a467a2817ca1 3f031f564e84 33271b9f8b21 10af503aede9]
	I0624 03:39:30.629523    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:39:30.651893    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:39:30.651958    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:39:30.662948    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:39:30.663009    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:39:30.674396    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:39:30.674456    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:39:30.685049    6932 logs.go:276] 0 containers: []
	W0624 03:39:30.685060    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:39:30.685112    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:39:30.696970    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:39:30.696988    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:39:30.696993    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:39:30.732302    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:39:30.732311    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:39:30.736528    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:39:30.736537    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:39:30.771137    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:39:30.771147    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:39:30.788920    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:39:30.788929    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:39:30.806224    6932 logs.go:123] Gathering logs for coredns [3f031f564e84] ...
	I0624 03:39:30.806234    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f031f564e84"
	I0624 03:39:30.818151    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:39:30.818160    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:39:30.833733    6932 logs.go:123] Gathering logs for coredns [a467a2817ca1] ...
	I0624 03:39:30.833743    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a467a2817ca1"
	I0624 03:39:30.845115    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:39:30.845126    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:39:30.856821    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:39:30.856831    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:39:30.868413    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:39:30.868424    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:39:30.883100    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:39:30.883110    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:39:30.896175    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:39:30.896187    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:39:30.911666    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:39:30.911676    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:39:30.937155    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:39:30.937163    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:39:33.449331    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:39:38.451617    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:39:38.451763    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:39:38.464445    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:39:38.464525    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:39:38.475838    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:39:38.475916    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:39:38.486137    6932 logs.go:276] 4 containers: [a467a2817ca1 3f031f564e84 33271b9f8b21 10af503aede9]
	I0624 03:39:38.486203    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:39:38.496760    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:39:38.496831    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:39:38.512557    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:39:38.512628    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:39:38.523056    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:39:38.523118    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:39:38.533545    6932 logs.go:276] 0 containers: []
	W0624 03:39:38.533556    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:39:38.533613    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:39:38.544081    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:39:38.544097    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:39:38.544102    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:39:38.580774    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:39:38.580790    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:39:38.593546    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:39:38.593561    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:39:38.631331    6932 logs.go:123] Gathering logs for coredns [3f031f564e84] ...
	I0624 03:39:38.631342    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f031f564e84"
	I0624 03:39:38.646620    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:39:38.646629    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:39:38.658593    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:39:38.658604    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:39:38.677762    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:39:38.677771    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:39:38.695173    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:39:38.695182    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:39:38.709301    6932 logs.go:123] Gathering logs for coredns [a467a2817ca1] ...
	I0624 03:39:38.709317    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a467a2817ca1"
	I0624 03:39:38.721431    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:39:38.721441    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:39:38.732570    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:39:38.732581    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:39:38.736808    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:39:38.736818    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:39:38.751569    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:39:38.751579    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:39:38.763851    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:39:38.763864    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:39:38.782134    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:39:38.782146    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:39:41.309055    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:39:46.311324    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:39:46.311503    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:39:46.327810    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:39:46.327897    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:39:46.340831    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:39:46.340906    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:39:46.351953    6932 logs.go:276] 4 containers: [a467a2817ca1 3f031f564e84 33271b9f8b21 10af503aede9]
	I0624 03:39:46.352025    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:39:46.366450    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:39:46.366509    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:39:46.377328    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:39:46.377394    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:39:46.388840    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:39:46.388901    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:39:46.399707    6932 logs.go:276] 0 containers: []
	W0624 03:39:46.399718    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:39:46.399777    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:39:46.410803    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:39:46.410820    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:39:46.410825    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:39:46.429115    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:39:46.429128    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:39:46.448424    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:39:46.448434    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:39:46.459692    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:39:46.459706    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:39:46.483081    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:39:46.483090    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:39:46.498938    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:39:46.498950    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:39:46.533886    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:39:46.533896    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:39:46.547981    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:39:46.547992    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:39:46.552610    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:39:46.552619    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:39:46.563877    6932 logs.go:123] Gathering logs for coredns [3f031f564e84] ...
	I0624 03:39:46.563888    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f031f564e84"
	I0624 03:39:46.575962    6932 logs.go:123] Gathering logs for coredns [a467a2817ca1] ...
	I0624 03:39:46.575977    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a467a2817ca1"
	I0624 03:39:46.591106    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:39:46.591117    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:39:46.603758    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:39:46.603774    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:39:46.616362    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:39:46.616373    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:39:46.633947    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:39:46.633957    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:39:49.173384    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:39:54.175596    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:39:54.175805    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:39:54.196556    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:39:54.196647    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:39:54.212852    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:39:54.212942    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:39:54.225080    6932 logs.go:276] 4 containers: [a467a2817ca1 3f031f564e84 33271b9f8b21 10af503aede9]
	I0624 03:39:54.225142    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:39:54.236164    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:39:54.236230    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:39:54.248269    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:39:54.248341    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:39:54.258665    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:39:54.258726    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:39:54.268923    6932 logs.go:276] 0 containers: []
	W0624 03:39:54.268934    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:39:54.268981    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:39:54.279292    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:39:54.279309    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:39:54.279314    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:39:54.294964    6932 logs.go:123] Gathering logs for coredns [3f031f564e84] ...
	I0624 03:39:54.294975    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f031f564e84"
	I0624 03:39:54.307087    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:39:54.307097    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:39:54.318841    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:39:54.318851    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:39:54.333267    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:39:54.333278    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:39:54.351838    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:39:54.351847    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:39:54.364374    6932 logs.go:123] Gathering logs for coredns [a467a2817ca1] ...
	I0624 03:39:54.364384    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a467a2817ca1"
	I0624 03:39:54.378300    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:39:54.378309    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:39:54.402275    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:39:54.402282    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:39:54.406945    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:39:54.406951    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:39:54.422607    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:39:54.422618    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:39:54.459238    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:39:54.459247    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:39:54.493883    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:39:54.493893    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:39:54.508275    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:39:54.508291    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:39:54.521110    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:39:54.521126    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:39:57.038662    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:40:02.040891    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:40:02.041098    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:40:02.060654    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:40:02.060733    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:40:02.074711    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:40:02.074787    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:40:02.086971    6932 logs.go:276] 4 containers: [a467a2817ca1 3f031f564e84 33271b9f8b21 10af503aede9]
	I0624 03:40:02.087071    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:40:02.097929    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:40:02.098003    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:40:02.108176    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:40:02.108244    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:40:02.121673    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:40:02.121737    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:40:02.132059    6932 logs.go:276] 0 containers: []
	W0624 03:40:02.132071    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:40:02.132126    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:40:02.146637    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:40:02.146663    6932 logs.go:123] Gathering logs for coredns [a467a2817ca1] ...
	I0624 03:40:02.146670    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a467a2817ca1"
	I0624 03:40:02.158075    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:40:02.158085    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:40:02.162991    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:40:02.162998    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:40:02.197581    6932 logs.go:123] Gathering logs for coredns [3f031f564e84] ...
	I0624 03:40:02.197592    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f031f564e84"
	I0624 03:40:02.209270    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:40:02.209279    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:40:02.220833    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:40:02.220844    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:40:02.237184    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:40:02.237195    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:40:02.251091    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:40:02.251101    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:40:02.263065    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:40:02.263076    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:40:02.278173    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:40:02.278183    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:40:02.296642    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:40:02.296652    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:40:02.307872    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:40:02.307881    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:40:02.320052    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:40:02.320062    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:40:02.353725    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:40:02.353732    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:40:02.369725    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:40:02.369734    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:40:04.896892    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:40:09.898382    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:40:09.898587    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:40:09.920532    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:40:09.920630    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:40:09.935805    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:40:09.935887    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:40:09.948366    6932 logs.go:276] 4 containers: [a467a2817ca1 3f031f564e84 33271b9f8b21 10af503aede9]
	I0624 03:40:09.948446    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:40:09.960370    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:40:09.960436    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:40:09.970739    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:40:09.970798    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:40:09.981192    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:40:09.981253    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:40:09.990870    6932 logs.go:276] 0 containers: []
	W0624 03:40:09.990884    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:40:09.990945    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:40:10.006038    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:40:10.006056    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:40:10.006060    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:40:10.020657    6932 logs.go:123] Gathering logs for coredns [3f031f564e84] ...
	I0624 03:40:10.020668    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f031f564e84"
	I0624 03:40:10.032266    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:40:10.032276    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:40:10.044038    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:40:10.044048    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:40:10.061674    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:40:10.061684    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:40:10.066352    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:40:10.066360    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:40:10.078240    6932 logs.go:123] Gathering logs for coredns [a467a2817ca1] ...
	I0624 03:40:10.078251    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a467a2817ca1"
	I0624 03:40:10.092689    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:40:10.092701    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:40:10.106551    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:40:10.106562    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:40:10.118386    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:40:10.118398    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:40:10.131419    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:40:10.131429    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:40:10.167403    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:40:10.167413    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:40:10.181950    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:40:10.181960    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:40:10.205259    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:40:10.205266    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:40:10.219225    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:40:10.219236    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:40:12.757129    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:40:17.759383    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:40:17.759521    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:40:17.780220    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:40:17.780301    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:40:17.794885    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:40:17.794952    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:40:17.810931    6932 logs.go:276] 4 containers: [a467a2817ca1 3f031f564e84 33271b9f8b21 10af503aede9]
	I0624 03:40:17.810998    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:40:17.821741    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:40:17.821800    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:40:17.831820    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:40:17.831879    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:40:17.842089    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:40:17.842159    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:40:17.852030    6932 logs.go:276] 0 containers: []
	W0624 03:40:17.852043    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:40:17.852099    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:40:17.862679    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:40:17.862696    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:40:17.862702    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:40:17.899103    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:40:17.899113    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:40:17.913374    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:40:17.913388    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:40:17.949340    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:40:17.949348    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:40:17.961868    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:40:17.961878    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:40:17.986321    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:40:17.986329    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:40:17.997759    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:40:17.997769    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:40:18.002655    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:40:18.002662    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:40:18.014732    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:40:18.014743    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:40:18.032993    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:40:18.033004    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:40:18.044395    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:40:18.044404    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:40:18.056244    6932 logs.go:123] Gathering logs for coredns [a467a2817ca1] ...
	I0624 03:40:18.056255    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a467a2817ca1"
	I0624 03:40:18.068228    6932 logs.go:123] Gathering logs for coredns [3f031f564e84] ...
	I0624 03:40:18.068239    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f031f564e84"
	I0624 03:40:18.080044    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:40:18.080057    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:40:18.095173    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:40:18.095183    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:40:20.611172    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:40:25.613490    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:40:25.613992    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:40:25.653338    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:40:25.653471    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:40:25.675354    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:40:25.675463    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:40:25.691963    6932 logs.go:276] 4 containers: [a467a2817ca1 3f031f564e84 33271b9f8b21 10af503aede9]
	I0624 03:40:25.692045    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:40:25.713169    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:40:25.713243    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:40:25.730998    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:40:25.731080    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:40:25.747460    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:40:25.747542    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:40:25.758357    6932 logs.go:276] 0 containers: []
	W0624 03:40:25.758371    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:40:25.758434    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:40:25.769082    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:40:25.769101    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:40:25.769106    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:40:25.783308    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:40:25.783318    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:40:25.795530    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:40:25.795540    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:40:25.813400    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:40:25.813411    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:40:25.825653    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:40:25.825662    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:40:25.848527    6932 logs.go:123] Gathering logs for coredns [a467a2817ca1] ...
	I0624 03:40:25.848537    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a467a2817ca1"
	I0624 03:40:25.860521    6932 logs.go:123] Gathering logs for coredns [3f031f564e84] ...
	I0624 03:40:25.860534    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f031f564e84"
	I0624 03:40:25.872321    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:40:25.872331    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:40:25.887848    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:40:25.887862    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:40:25.899472    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:40:25.899486    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:40:25.934782    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:40:25.934791    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:40:25.939000    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:40:25.939008    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:40:25.974787    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:40:25.974802    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:40:25.987219    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:40:25.987230    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:40:25.999428    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:40:25.999440    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:40:28.516380    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:40:33.518608    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:40:33.518795    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:40:33.536443    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:40:33.536524    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:40:33.549427    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:40:33.549499    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:40:33.560538    6932 logs.go:276] 4 containers: [a467a2817ca1 3f031f564e84 33271b9f8b21 10af503aede9]
	I0624 03:40:33.560617    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:40:33.571054    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:40:33.571128    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:40:33.585627    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:40:33.585697    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:40:33.596720    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:40:33.596784    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:40:33.606881    6932 logs.go:276] 0 containers: []
	W0624 03:40:33.606893    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:40:33.606952    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:40:33.617937    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:40:33.617955    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:40:33.617961    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:40:33.629471    6932 logs.go:123] Gathering logs for coredns [a467a2817ca1] ...
	I0624 03:40:33.629483    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a467a2817ca1"
	I0624 03:40:33.641169    6932 logs.go:123] Gathering logs for coredns [3f031f564e84] ...
	I0624 03:40:33.641179    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f031f564e84"
	I0624 03:40:33.652424    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:40:33.652435    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:40:33.664142    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:40:33.664152    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:40:33.684621    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:40:33.684633    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:40:33.698712    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:40:33.698724    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:40:33.710300    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:40:33.710311    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:40:33.724039    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:40:33.724050    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:40:33.729976    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:40:33.729984    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:40:33.765123    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:40:33.765136    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:40:33.779779    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:40:33.779794    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:40:33.816164    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:40:33.816173    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:40:33.831081    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:40:33.831090    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:40:33.842736    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:40:33.842746    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:40:36.367589    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:40:41.369760    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:40:41.369967    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:40:41.389203    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:40:41.389305    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:40:41.403076    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:40:41.403154    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:40:41.416339    6932 logs.go:276] 4 containers: [a467a2817ca1 3f031f564e84 33271b9f8b21 10af503aede9]
	I0624 03:40:41.416404    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:40:41.428461    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:40:41.428525    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:40:41.442435    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:40:41.442503    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:40:41.452746    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:40:41.452811    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:40:41.462953    6932 logs.go:276] 0 containers: []
	W0624 03:40:41.462967    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:40:41.463019    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:40:41.474070    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:40:41.474087    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:40:41.474093    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:40:41.509550    6932 logs.go:123] Gathering logs for coredns [3f031f564e84] ...
	I0624 03:40:41.509560    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f031f564e84"
	I0624 03:40:41.521428    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:40:41.521437    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:40:41.532905    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:40:41.532915    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:40:41.555505    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:40:41.555515    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:40:41.570797    6932 logs.go:123] Gathering logs for coredns [a467a2817ca1] ...
	I0624 03:40:41.570809    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a467a2817ca1"
	I0624 03:40:41.583013    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:40:41.583023    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:40:41.598927    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:40:41.598938    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:40:41.617101    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:40:41.617111    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:40:41.629350    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:40:41.629360    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:40:41.664940    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:40:41.664956    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:40:41.669560    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:40:41.669565    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:40:41.684032    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:40:41.684046    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:40:41.695532    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:40:41.695550    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:40:41.706891    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:40:41.706905    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:40:44.220480    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:40:49.222758    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:40:49.227140    6932 out.go:177] 
	W0624 03:40:49.230146    6932 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0624 03:40:49.230154    6932 out.go:239] * 
	W0624 03:40:49.230663    6932 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:40:49.241004    6932 out.go:177] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-398000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-06-24 03:40:49.330506 -0700 PDT m=+1323.805338626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-398000 -n running-upgrade-398000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-398000 -n running-upgrade-398000: exit status 2 (15.695435083s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-398000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p test-preload-948000         | test-preload-948000       | jenkins | v1.33.1 | 24 Jun 24 03:30 PDT | 24 Jun 24 03:30 PDT |
	| start   | -p scheduled-stop-300000       | scheduled-stop-300000     | jenkins | v1.33.1 | 24 Jun 24 03:30 PDT |                     |
	|         | --memory=2048 --driver=qemu2   |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-300000       | scheduled-stop-300000     | jenkins | v1.33.1 | 24 Jun 24 03:30 PDT | 24 Jun 24 03:30 PDT |
	| start   | -p skaffold-135000             | skaffold-135000           | jenkins | v1.33.1 | 24 Jun 24 03:30 PDT |                     |
	|         | --memory=2600 --driver=qemu2   |                           |         |         |                     |                     |
	| delete  | -p skaffold-135000             | skaffold-135000           | jenkins | v1.33.1 | 24 Jun 24 03:30 PDT | 24 Jun 24 03:30 PDT |
	| start   | -p offline-docker-953000       | offline-docker-953000     | jenkins | v1.33.1 | 24 Jun 24 03:30 PDT |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --memory=2048 --wait=true      |                           |         |         |                     |                     |
	|         | --driver=qemu2                 |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-996000         | NoKubernetes-996000       | jenkins | v1.33.1 | 24 Jun 24 03:30 PDT |                     |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |         |         |                     |                     |
	|         | --driver=qemu2                 |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-996000         | NoKubernetes-996000       | jenkins | v1.33.1 | 24 Jun 24 03:30 PDT |                     |
	|         | --driver=qemu2                 |                           |         |         |                     |                     |
	| delete  | -p offline-docker-953000       | offline-docker-953000     | jenkins | v1.33.1 | 24 Jun 24 03:30 PDT | 24 Jun 24 03:30 PDT |
	| start   | -p kubernetes-upgrade-786000   | kubernetes-upgrade-786000 | jenkins | v1.33.1 | 24 Jun 24 03:30 PDT |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=qemu2                 |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-996000         | NoKubernetes-996000       | jenkins | v1.33.1 | 24 Jun 24 03:30 PDT |                     |
	|         | --no-kubernetes --driver=qemu2 |                           |         |         |                     |                     |
	|         |                                |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-996000         | NoKubernetes-996000       | jenkins | v1.33.1 | 24 Jun 24 03:30 PDT |                     |
	|         | --no-kubernetes --driver=qemu2 |                           |         |         |                     |                     |
	|         |                                |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-786000   | kubernetes-upgrade-786000 | jenkins | v1.33.1 | 24 Jun 24 03:30 PDT | 24 Jun 24 03:30 PDT |
	| start   | -p kubernetes-upgrade-786000   | kubernetes-upgrade-786000 | jenkins | v1.33.1 | 24 Jun 24 03:30 PDT |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=qemu2                 |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-996000 sudo    | NoKubernetes-996000       | jenkins | v1.33.1 | 24 Jun 24 03:30 PDT |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-996000         | NoKubernetes-996000       | jenkins | v1.33.1 | 24 Jun 24 03:30 PDT | 24 Jun 24 03:31 PDT |
	| delete  | -p kubernetes-upgrade-786000   | kubernetes-upgrade-786000 | jenkins | v1.33.1 | 24 Jun 24 03:30 PDT | 24 Jun 24 03:30 PDT |
	| start   | -p NoKubernetes-996000         | NoKubernetes-996000       | jenkins | v1.33.1 | 24 Jun 24 03:31 PDT |                     |
	|         | --driver=qemu2                 |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-252000      | minikube                  | jenkins | v1.26.0 | 24 Jun 24 03:31 PDT | 24 Jun 24 03:31 PDT |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-996000 sudo    | NoKubernetes-996000       | jenkins | v1.33.1 | 24 Jun 24 03:31 PDT |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-996000         | NoKubernetes-996000       | jenkins | v1.33.1 | 24 Jun 24 03:31 PDT | 24 Jun 24 03:31 PDT |
	| start   | -p running-upgrade-398000      | minikube                  | jenkins | v1.26.0 | 24 Jun 24 03:31 PDT | 24 Jun 24 03:32 PDT |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-252000 stop    | minikube                  | jenkins | v1.26.0 | 24 Jun 24 03:31 PDT | 24 Jun 24 03:31 PDT |
	| start   | -p stopped-upgrade-252000      | stopped-upgrade-252000    | jenkins | v1.33.1 | 24 Jun 24 03:31 PDT |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=qemu2                 |                           |         |         |                     |                     |
	| start   | -p running-upgrade-398000      | running-upgrade-398000    | jenkins | v1.33.1 | 24 Jun 24 03:32 PDT |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=qemu2                 |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/24 03:32:14
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0624 03:32:14.091013    6932 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:32:14.091178    6932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:32:14.091181    6932 out.go:304] Setting ErrFile to fd 2...
	I0624 03:32:14.091184    6932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:32:14.091323    6932 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:32:14.092496    6932 out.go:298] Setting JSON to false
	I0624 03:32:14.108906    6932 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5504,"bootTime":1719219630,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:32:14.108964    6932 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:32:14.114595    6932 out.go:177] * [running-upgrade-398000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:32:14.122598    6932 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:32:14.122689    6932 notify.go:220] Checking for updates...
	I0624 03:32:14.130568    6932 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:32:14.134647    6932 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:32:14.137616    6932 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:32:14.140613    6932 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:32:14.143606    6932 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:32:14.146858    6932 config.go:182] Loaded profile config "running-upgrade-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0624 03:32:14.149570    6932 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0624 03:32:14.152562    6932 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:32:14.155592    6932 out.go:177] * Using the qemu2 driver based on existing profile
	I0624 03:32:14.162581    6932 start.go:297] selected driver: qemu2
	I0624 03:32:14.162587    6932 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-398000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51210 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-398000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0624 03:32:14.162630    6932 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:32:14.164724    6932 cni.go:84] Creating CNI manager for ""
	I0624 03:32:14.164742    6932 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:32:14.164770    6932 start.go:340] cluster config:
	{Name:running-upgrade-398000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51210 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-398000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0624 03:32:14.164817    6932 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:32:14.172585    6932 out.go:177] * Starting "running-upgrade-398000" primary control-plane node in "running-upgrade-398000" cluster
	I0624 03:32:14.719614    6914 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/config.json ...
	I0624 03:32:14.719943    6914 machine.go:94] provisionDockerMachine start ...
	I0624 03:32:14.720015    6914 main.go:141] libmachine: Using SSH client type: native
	I0624 03:32:14.720277    6914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d82900] 0x100d85160 <nil>  [] 0s} localhost 51107 <nil> <nil>}
	I0624 03:32:14.720286    6914 main.go:141] libmachine: About to run SSH command:
	hostname
	I0624 03:32:14.783313    6914 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0624 03:32:14.783327    6914 buildroot.go:166] provisioning hostname "stopped-upgrade-252000"
	I0624 03:32:14.783381    6914 main.go:141] libmachine: Using SSH client type: native
	I0624 03:32:14.783531    6914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d82900] 0x100d85160 <nil>  [] 0s} localhost 51107 <nil> <nil>}
	I0624 03:32:14.783538    6914 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-252000 && echo "stopped-upgrade-252000" | sudo tee /etc/hostname
	I0624 03:32:14.844972    6914 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-252000
	
	I0624 03:32:14.845036    6914 main.go:141] libmachine: Using SSH client type: native
	I0624 03:32:14.845166    6914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d82900] 0x100d85160 <nil>  [] 0s} localhost 51107 <nil> <nil>}
	I0624 03:32:14.845178    6914 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-252000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-252000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-252000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0624 03:32:14.906871    6914 main.go:141] libmachine: SSH cmd err, output: <nil>: 
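	The hostname provisioning above uses an idempotent /etc/hosts rewrite: if a 127.0.1.1 entry already exists it is replaced in place, otherwise one is appended, so re-running it never duplicates the line. A minimal standalone sketch of the same pattern (the hostname here is a placeholder, not a value from this run):
	
	#!/bin/sh
	# Ensure /etc/hosts maps 127.0.1.1 to the desired hostname (idempotent).
	NAME=example-host   # placeholder; the run above uses the profile name
	if ! grep -q "[[:space:]]${NAME}\$" /etc/hosts; then
	    if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
	        # An entry exists: rewrite it in place.
	        sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NAME}/" /etc/hosts
	    else
	        # No entry yet: append one.
	        echo "127.0.1.1 ${NAME}" | sudo tee -a /etc/hosts >/dev/null
	    fi
	fi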
	I0624 03:32:14.906890    6914 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19124-4612/.minikube CaCertPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19124-4612/.minikube}
	I0624 03:32:14.906900    6914 buildroot.go:174] setting up certificates
	I0624 03:32:14.906905    6914 provision.go:84] configureAuth start
	I0624 03:32:14.906909    6914 provision.go:143] copyHostCerts
	I0624 03:32:14.907000    6914 exec_runner.go:144] found /Users/jenkins/minikube-integration/19124-4612/.minikube/ca.pem, removing ...
	I0624 03:32:14.907006    6914 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19124-4612/.minikube/ca.pem
	I0624 03:32:14.907118    6914 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19124-4612/.minikube/ca.pem (1082 bytes)
	I0624 03:32:14.907295    6914 exec_runner.go:144] found /Users/jenkins/minikube-integration/19124-4612/.minikube/cert.pem, removing ...
	I0624 03:32:14.907298    6914 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19124-4612/.minikube/cert.pem
	I0624 03:32:14.907337    6914 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19124-4612/.minikube/cert.pem (1123 bytes)
	I0624 03:32:14.907439    6914 exec_runner.go:144] found /Users/jenkins/minikube-integration/19124-4612/.minikube/key.pem, removing ...
	I0624 03:32:14.907442    6914 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19124-4612/.minikube/key.pem
	I0624 03:32:14.907487    6914 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19124-4612/.minikube/key.pem (1679 bytes)
	I0624 03:32:14.907577    6914 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-252000 san=[127.0.0.1 localhost minikube stopped-upgrade-252000]
	I0624 03:32:14.952653    6914 provision.go:177] copyRemoteCerts
	I0624 03:32:14.952681    6914 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0624 03:32:14.952687    6914 sshutil.go:53] new ssh client: &{IP:localhost Port:51107 SSHKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/stopped-upgrade-252000/id_rsa Username:docker}
	I0624 03:32:14.986509    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0624 03:32:14.993388    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0624 03:32:14.999836    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0624 03:32:15.006656    6914 provision.go:87] duration metric: took 99.745042ms to configureAuth
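	configureAuth above copies the host CA material and issues a server certificate whose SANs cover 127.0.0.1, localhost, minikube, and the machine name. Roughly equivalent openssl steps, as a sketch only (bash required for the process substitution; all file names and the org are illustrative, and minikube itself does this in Go rather than via openssl):
	
	# Issue a server cert signed by an existing CA with the SANs listed above.
	# ca.pem / ca-key.pem are assumed to exist already.
	openssl genrsa -out server-key.pem 2048
	openssl req -new -key server-key.pem -subj "/O=jenkins.example" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	    -days 365 -out server.pem \
	    -extfile <(printf 'subjectAltName=IP:127.0.0.1,DNS:localhost,DNS:minikube')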
	I0624 03:32:15.006665    6914 buildroot.go:189] setting minikube options for container-runtime
	I0624 03:32:15.006777    6914 config.go:182] Loaded profile config "stopped-upgrade-252000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0624 03:32:15.006811    6914 main.go:141] libmachine: Using SSH client type: native
	I0624 03:32:15.006893    6914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d82900] 0x100d85160 <nil>  [] 0s} localhost 51107 <nil> <nil>}
	I0624 03:32:15.006897    6914 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0624 03:32:14.176566    6932 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0624 03:32:14.176580    6932 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0624 03:32:14.176585    6932 cache.go:56] Caching tarball of preloaded images
	I0624 03:32:14.176634    6932 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0624 03:32:14.176638    6932 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0624 03:32:14.176694    6932 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/config.json ...
	I0624 03:32:14.177091    6932 start.go:360] acquireMachinesLock for running-upgrade-398000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:32:15.574890    6932 start.go:364] duration metric: took 1.397803083s to acquireMachinesLock for "running-upgrade-398000"
	I0624 03:32:15.574910    6932 start.go:96] Skipping create...Using existing machine configuration
	I0624 03:32:15.574927    6932 fix.go:54] fixHost starting: 
	I0624 03:32:15.575717    6932 fix.go:112] recreateIfNeeded on running-upgrade-398000: state=Running err=<nil>
	W0624 03:32:15.575727    6932 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 03:32:15.582827    6932 out.go:177] * Updating the running qemu2 "running-upgrade-398000" VM ...
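	Both processes in this log (6914 and 6932) serialize machine operations behind the same named lock; the Delay:500ms / Timeout:13m0s fields describe its retry cadence, and here 6932 waited about 1.4s for 6914 to release it. A rough shell analogue of acquire-with-timeout using flock (minikube uses its own Go locker; the lock path and timeout are illustrative):
	
	#!/bin/bash
	# Acquire an exclusive lock on fd 9, waiting up to 780s (13m).
	exec 9>/tmp/machines.lock
	if ! flock -w 780 9; then
	    echo "could not acquire machines lock" >&2
	    exit 1
	fi
	# ... critical section: create or fix the machine ...
	flock -u 9   # release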
	I0624 03:32:15.065024    6914 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0624 03:32:15.065033    6914 buildroot.go:70] root file system type: tmpfs
	I0624 03:32:15.065100    6914 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0624 03:32:15.065144    6914 main.go:141] libmachine: Using SSH client type: native
	I0624 03:32:15.065288    6914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d82900] 0x100d85160 <nil>  [] 0s} localhost 51107 <nil> <nil>}
	I0624 03:32:15.065321    6914 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0624 03:32:15.128844    6914 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0624 03:32:15.128889    6914 main.go:141] libmachine: Using SSH client type: native
	I0624 03:32:15.129011    6914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d82900] 0x100d85160 <nil>  [] 0s} localhost 51107 <nil> <nil>}
	I0624 03:32:15.129020    6914 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0624 03:32:15.471994    6914 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0624 03:32:15.472010    6914 machine.go:97] duration metric: took 752.064209ms to provisionDockerMachine
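	The command above is the provisioner's change-detection idiom: docker.service.new is only swapped in, and Docker only reloaded, enabled, and restarted, when it differs from the installed unit. diff also exits non-zero when the installed file is missing (the "can't stat" case in this output), so a fresh VM takes the install branch too. The bare pattern, with a placeholder unit name:
	
	# Install a unit file only when its content changed, then reload/restart.
	UNIT=/lib/systemd/system/example.service   # placeholder
	sudo diff -u "$UNIT" "$UNIT.new" || {
	    sudo mv "$UNIT.new" "$UNIT"
	    sudo systemctl -f daemon-reload
	    sudo systemctl -f enable example.service
	    sudo systemctl -f restart example.service
	}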
	I0624 03:32:15.472016    6914 start.go:293] postStartSetup for "stopped-upgrade-252000" (driver="qemu2")
	I0624 03:32:15.472022    6914 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0624 03:32:15.472079    6914 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0624 03:32:15.472090    6914 sshutil.go:53] new ssh client: &{IP:localhost Port:51107 SSHKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/stopped-upgrade-252000/id_rsa Username:docker}
	I0624 03:32:15.506683    6914 ssh_runner.go:195] Run: cat /etc/os-release
	I0624 03:32:15.507951    6914 info.go:137] Remote host: Buildroot 2021.02.12
	I0624 03:32:15.507959    6914 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19124-4612/.minikube/addons for local assets ...
	I0624 03:32:15.508033    6914 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19124-4612/.minikube/files for local assets ...
	I0624 03:32:15.508125    6914 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19124-4612/.minikube/files/etc/ssl/certs/51362.pem -> 51362.pem in /etc/ssl/certs
	I0624 03:32:15.508226    6914 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0624 03:32:15.511002    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/files/etc/ssl/certs/51362.pem --> /etc/ssl/certs/51362.pem (1708 bytes)
	I0624 03:32:15.517593    6914 start.go:296] duration metric: took 45.572625ms for postStartSetup
	I0624 03:32:15.517609    6914 fix.go:56] duration metric: took 20.381250875s for fixHost
	I0624 03:32:15.517640    6914 main.go:141] libmachine: Using SSH client type: native
	I0624 03:32:15.517739    6914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d82900] 0x100d85160 <nil>  [] 0s} localhost 51107 <nil> <nil>}
	I0624 03:32:15.517743    6914 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0624 03:32:15.574829    6914 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719225135.306067545
	
	I0624 03:32:15.574838    6914 fix.go:216] guest clock: 1719225135.306067545
	I0624 03:32:15.574842    6914 fix.go:229] Guest: 2024-06-24 03:32:15.306067545 -0700 PDT Remote: 2024-06-24 03:32:15.517612 -0700 PDT m=+20.489537709 (delta=-211.544455ms)
	I0624 03:32:15.574852    6914 fix.go:200] guest clock delta is within tolerance: -211.544455ms
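	The guest-clock check in fix.go reads the guest's date +%s.%N over SSH, subtracts it from the host clock, and only resyncs when the absolute delta exceeds a tolerance (the -211ms delta above passes). A sketch of that comparison (the ssh target and the 2s tolerance are placeholders; hosts without GNU date lack %N and would need whole-second precision):
	
	# Compare a remote clock against the local one; exit 1 on drift > 2s.
	guest=$(ssh user@guest date +%s.%N)   # placeholder target
	host=$(date +%s.%N)
	echo "$guest $host" | awk '{ d = $1 - $2; if (d < 0) d = -d
	    printf "delta=%.3fs\n", d; exit (d > 2.0) ? 1 : 0 }'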
	I0624 03:32:15.574855    6914 start.go:83] releasing machines lock for "stopped-upgrade-252000", held for 20.438509417s
	I0624 03:32:15.574919    6914 ssh_runner.go:195] Run: cat /version.json
	I0624 03:32:15.574923    6914 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0624 03:32:15.574927    6914 sshutil.go:53] new ssh client: &{IP:localhost Port:51107 SSHKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/stopped-upgrade-252000/id_rsa Username:docker}
	I0624 03:32:15.574942    6914 sshutil.go:53] new ssh client: &{IP:localhost Port:51107 SSHKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/stopped-upgrade-252000/id_rsa Username:docker}
	W0624 03:32:15.575597    6914 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:51290->127.0.0.1:51107: write: broken pipe
	I0624 03:32:15.575614    6914 retry.go:31] will retry after 166.471983ms: ssh: handshake failed: write tcp 127.0.0.1:51290->127.0.0.1:51107: write: broken pipe
	W0624 03:32:15.773148    6914 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0624 03:32:15.773200    6914 ssh_runner.go:195] Run: systemctl --version
	I0624 03:32:15.774983    6914 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0624 03:32:15.776797    6914 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0624 03:32:15.776836    6914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0624 03:32:15.779940    6914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0624 03:32:15.784589    6914 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0624 03:32:15.784649    6914 start.go:494] detecting cgroup driver to use...
	I0624 03:32:15.784790    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 03:32:15.792347    6914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0624 03:32:15.795694    6914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0624 03:32:15.799059    6914 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0624 03:32:15.799083    6914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0624 03:32:15.802444    6914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 03:32:15.805279    6914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0624 03:32:15.808184    6914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 03:32:15.811492    6914 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0624 03:32:15.814664    6914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0624 03:32:15.817614    6914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0624 03:32:15.820305    6914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0624 03:32:15.823563    6914 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0624 03:32:15.827115    6914 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0624 03:32:15.830130    6914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:32:15.892646    6914 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0624 03:32:15.903467    6914 start.go:494] detecting cgroup driver to use...
	I0624 03:32:15.903550    6914 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0624 03:32:15.911126    6914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 03:32:15.915953    6914 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0624 03:32:15.921862    6914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 03:32:15.926459    6914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 03:32:15.931312    6914 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0624 03:32:15.955212    6914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 03:32:15.960746    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 03:32:15.966537    6914 ssh_runner.go:195] Run: which cri-dockerd
	I0624 03:32:15.967605    6914 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0624 03:32:15.970698    6914 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0624 03:32:15.975728    6914 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0624 03:32:16.041735    6914 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0624 03:32:16.109713    6914 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0624 03:32:16.109783    6914 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0624 03:32:16.115778    6914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:32:16.190608    6914 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0624 03:32:17.316564    6914 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.125949792s)
	I0624 03:32:17.316619    6914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0624 03:32:17.324072    6914 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0624 03:32:17.333012    6914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0624 03:32:17.337711    6914 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0624 03:32:17.398772    6914 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0624 03:32:17.463600    6914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:32:17.549330    6914 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0624 03:32:17.554696    6914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0624 03:32:17.559138    6914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:32:17.623722    6914 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0624 03:32:17.661541    6914 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0624 03:32:17.661617    6914 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0624 03:32:17.663634    6914 start.go:562] Will wait 60s for crictl version
	I0624 03:32:17.663678    6914 ssh_runner.go:195] Run: which crictl
	I0624 03:32:17.664881    6914 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0624 03:32:17.679281    6914 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0624 03:32:17.679351    6914 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0624 03:32:17.695995    6914 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
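	Between restarting cri-docker.service and querying versions, start.go waits up to 60s for /var/run/cri-dockerd.sock to appear before trusting crictl. The same gate in shell form (the socket path comes from the log; the polling loop itself is illustrative):
	
	# Wait up to 60s for the CRI socket, then query the runtime version.
	SOCK=/var/run/cri-dockerd.sock
	for _ in $(seq 1 60); do
	    [ -S "$SOCK" ] && break
	    sleep 1
	done
	[ -S "$SOCK" ] || { echo "timed out waiting for $SOCK" >&2; exit 1; }
	sudo crictl --runtime-endpoint "unix://$SOCK" version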
	I0624 03:32:15.586912    6932 machine.go:94] provisionDockerMachine start ...
	I0624 03:32:15.586959    6932 main.go:141] libmachine: Using SSH client type: native
	I0624 03:32:15.587072    6932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051d2900] 0x1051d5160 <nil>  [] 0s} localhost 51144 <nil> <nil>}
	I0624 03:32:15.587077    6932 main.go:141] libmachine: About to run SSH command:
	hostname
	I0624 03:32:15.643689    6932 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-398000
	
	I0624 03:32:15.643705    6932 buildroot.go:166] provisioning hostname "running-upgrade-398000"
	I0624 03:32:15.643747    6932 main.go:141] libmachine: Using SSH client type: native
	I0624 03:32:15.643862    6932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051d2900] 0x1051d5160 <nil>  [] 0s} localhost 51144 <nil> <nil>}
	I0624 03:32:15.643867    6932 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-398000 && echo "running-upgrade-398000" | sudo tee /etc/hostname
	I0624 03:32:15.707421    6932 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-398000
	
	I0624 03:32:15.707475    6932 main.go:141] libmachine: Using SSH client type: native
	I0624 03:32:15.707604    6932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051d2900] 0x1051d5160 <nil>  [] 0s} localhost 51144 <nil> <nil>}
	I0624 03:32:15.707611    6932 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-398000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-398000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-398000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0624 03:32:15.764120    6932 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0624 03:32:15.764135    6932 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19124-4612/.minikube CaCertPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19124-4612/.minikube}
	I0624 03:32:15.764148    6932 buildroot.go:174] setting up certificates
	I0624 03:32:15.764152    6932 provision.go:84] configureAuth start
	I0624 03:32:15.764156    6932 provision.go:143] copyHostCerts
	I0624 03:32:15.764224    6932 exec_runner.go:144] found /Users/jenkins/minikube-integration/19124-4612/.minikube/ca.pem, removing ...
	I0624 03:32:15.764233    6932 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19124-4612/.minikube/ca.pem
	I0624 03:32:15.764346    6932 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19124-4612/.minikube/ca.pem (1082 bytes)
	I0624 03:32:15.764538    6932 exec_runner.go:144] found /Users/jenkins/minikube-integration/19124-4612/.minikube/cert.pem, removing ...
	I0624 03:32:15.764542    6932 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19124-4612/.minikube/cert.pem
	I0624 03:32:15.764582    6932 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19124-4612/.minikube/cert.pem (1123 bytes)
	I0624 03:32:15.764680    6932 exec_runner.go:144] found /Users/jenkins/minikube-integration/19124-4612/.minikube/key.pem, removing ...
	I0624 03:32:15.764684    6932 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19124-4612/.minikube/key.pem
	I0624 03:32:15.764718    6932 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19124-4612/.minikube/key.pem (1679 bytes)
	I0624 03:32:15.764810    6932 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-398000 san=[127.0.0.1 localhost minikube running-upgrade-398000]
	I0624 03:32:15.842744    6932 provision.go:177] copyRemoteCerts
	I0624 03:32:15.842774    6932 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0624 03:32:15.842783    6932 sshutil.go:53] new ssh client: &{IP:localhost Port:51144 SSHKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/running-upgrade-398000/id_rsa Username:docker}
	I0624 03:32:15.873285    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0624 03:32:15.880441    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0624 03:32:15.887829    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0624 03:32:15.896056    6932 provision.go:87] duration metric: took 131.892875ms to configureAuth
	I0624 03:32:15.896069    6932 buildroot.go:189] setting minikube options for container-runtime
	I0624 03:32:15.896186    6932 config.go:182] Loaded profile config "running-upgrade-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0624 03:32:15.896224    6932 main.go:141] libmachine: Using SSH client type: native
	I0624 03:32:15.896315    6932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051d2900] 0x1051d5160 <nil>  [] 0s} localhost 51144 <nil> <nil>}
	I0624 03:32:15.896320    6932 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0624 03:32:15.956553    6932 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0624 03:32:15.956562    6932 buildroot.go:70] root file system type: tmpfs
	I0624 03:32:15.956619    6932 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0624 03:32:15.956665    6932 main.go:141] libmachine: Using SSH client type: native
	I0624 03:32:15.956781    6932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051d2900] 0x1051d5160 <nil>  [] 0s} localhost 51144 <nil> <nil>}
	I0624 03:32:15.956816    6932 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0624 03:32:16.018704    6932 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0624 03:32:16.018761    6932 main.go:141] libmachine: Using SSH client type: native
	I0624 03:32:16.018889    6932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051d2900] 0x1051d5160 <nil>  [] 0s} localhost 51144 <nil> <nil>}
	I0624 03:32:16.018898    6932 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0624 03:32:16.079785    6932 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0624 03:32:16.079796    6932 machine.go:97] duration metric: took 492.882333ms to provisionDockerMachine
	I0624 03:32:16.079802    6932 start.go:293] postStartSetup for "running-upgrade-398000" (driver="qemu2")
	I0624 03:32:16.079808    6932 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0624 03:32:16.079906    6932 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0624 03:32:16.079917    6932 sshutil.go:53] new ssh client: &{IP:localhost Port:51144 SSHKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/running-upgrade-398000/id_rsa Username:docker}
	I0624 03:32:16.113893    6932 ssh_runner.go:195] Run: cat /etc/os-release
	I0624 03:32:16.115378    6932 info.go:137] Remote host: Buildroot 2021.02.12
	I0624 03:32:16.115386    6932 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19124-4612/.minikube/addons for local assets ...
	I0624 03:32:16.115475    6932 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19124-4612/.minikube/files for local assets ...
	I0624 03:32:16.115567    6932 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19124-4612/.minikube/files/etc/ssl/certs/51362.pem -> 51362.pem in /etc/ssl/certs
	I0624 03:32:16.115662    6932 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0624 03:32:16.119178    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/files/etc/ssl/certs/51362.pem --> /etc/ssl/certs/51362.pem (1708 bytes)
	I0624 03:32:16.126317    6932 start.go:296] duration metric: took 46.50875ms for postStartSetup
	I0624 03:32:16.126332    6932 fix.go:56] duration metric: took 551.423416ms for fixHost
	I0624 03:32:16.126373    6932 main.go:141] libmachine: Using SSH client type: native
	I0624 03:32:16.126498    6932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051d2900] 0x1051d5160 <nil>  [] 0s} localhost 51144 <nil> <nil>}
	I0624 03:32:16.126502    6932 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0624 03:32:16.184479    6932 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719225136.521207639
	
	I0624 03:32:16.184489    6932 fix.go:216] guest clock: 1719225136.521207639
	I0624 03:32:16.184493    6932 fix.go:229] Guest: 2024-06-24 03:32:16.521207639 -0700 PDT Remote: 2024-06-24 03:32:16.126334 -0700 PDT m=+2.054686460 (delta=394.873639ms)
	I0624 03:32:16.184505    6932 fix.go:200] guest clock delta is within tolerance: 394.873639ms
	I0624 03:32:16.184507    6932 start.go:83] releasing machines lock for "running-upgrade-398000", held for 609.612708ms
	I0624 03:32:16.184572    6932 ssh_runner.go:195] Run: cat /version.json
	I0624 03:32:16.184579    6932 sshutil.go:53] new ssh client: &{IP:localhost Port:51144 SSHKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/running-upgrade-398000/id_rsa Username:docker}
	I0624 03:32:16.184597    6932 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0624 03:32:16.184624    6932 sshutil.go:53] new ssh client: &{IP:localhost Port:51144 SSHKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/running-upgrade-398000/id_rsa Username:docker}
	W0624 03:32:16.185224    6932 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51144: connect: connection refused
	I0624 03:32:16.185242    6932 retry.go:31] will retry after 182.040143ms: dial tcp [::1]:51144: connect: connection refused
	W0624 03:32:16.400640    6932 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0624 03:32:16.400714    6932 ssh_runner.go:195] Run: systemctl --version
	I0624 03:32:16.402584    6932 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0624 03:32:16.404128    6932 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0624 03:32:16.404153    6932 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0624 03:32:16.407252    6932 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0624 03:32:16.411484    6932 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0624 03:32:16.411491    6932 start.go:494] detecting cgroup driver to use...
	I0624 03:32:16.411566    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 03:32:16.416750    6932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0624 03:32:16.420238    6932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0624 03:32:16.423327    6932 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0624 03:32:16.423349    6932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0624 03:32:16.426457    6932 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 03:32:16.429164    6932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0624 03:32:16.432328    6932 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 03:32:16.435372    6932 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0624 03:32:16.438365    6932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0624 03:32:16.441262    6932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0624 03:32:16.444572    6932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0624 03:32:16.447648    6932 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0624 03:32:16.450430    6932 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0624 03:32:16.452931    6932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:32:16.553019    6932 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0624 03:32:16.559321    6932 start.go:494] detecting cgroup driver to use...
	I0624 03:32:16.559384    6932 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0624 03:32:16.567665    6932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 03:32:16.572802    6932 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0624 03:32:16.584041    6932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 03:32:16.589334    6932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 03:32:16.594185    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 03:32:16.599444    6932 ssh_runner.go:195] Run: which cri-dockerd
	I0624 03:32:16.600660    6932 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0624 03:32:16.603770    6932 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0624 03:32:16.608804    6932 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0624 03:32:16.698957    6932 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0624 03:32:16.787947    6932 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0624 03:32:16.788017    6932 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0624 03:32:16.793306    6932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:32:16.877822    6932 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0624 03:32:17.714748    6914 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0624 03:32:17.714817    6914 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0624 03:32:17.716106    6914 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0624 03:32:17.720108    6914 kubeadm.go:877] updating cluster {Name:stopped-upgrade-252000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51139 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-252000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0624 03:32:17.720155    6914 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0624 03:32:17.720195    6914 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0624 03:32:17.732741    6914 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0624 03:32:17.732759    6914 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0624 03:32:17.732814    6914 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0624 03:32:17.736446    6914 ssh_runner.go:195] Run: which lz4
	I0624 03:32:17.737912    6914 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0624 03:32:17.739271    6914 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0624 03:32:17.739294    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0624 03:32:18.500026    6914 docker.go:649] duration metric: took 762.156167ms to copy over tarball
	I0624 03:32:18.500090    6914 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0624 03:32:19.651728    6914 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.151635042s)
	I0624 03:32:19.651741    6914 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0624 03:32:19.667842    6914 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0624 03:32:19.671372    6914 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0624 03:32:19.676729    6914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:32:19.757857    6914 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0624 03:32:21.437481    6914 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.679622541s)
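	The preload path above runs as: stat the guest-side tarball (absent on a fresh VM), scp the ~360MB cache file in, unpack it into /var with extended attributes preserved, delete it, rewrite repositories.json, and restart Docker. The unpack step in isolation, with the same tar flags (lz4 must be present on the guest; the path matches the scp destination used above):
	
	# Unpack a preloaded image tarball into /var, then remove it.
	TARBALL=/preloaded.tar.lz4
	stat "$TARBALL" >/dev/null 2>&1 || { echo "preload not present" >&2; exit 1; }
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf "$TARBALL"
	sudo rm -f "$TARBALL"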
	I0624 03:32:21.437572    6914 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0624 03:32:21.452810    6914 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0624 03:32:21.452826    6914 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0624 03:32:21.452831    6914 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0624 03:32:21.459128    6914 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0624 03:32:21.459156    6914 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0624 03:32:21.459183    6914 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0624 03:32:21.459190    6914 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0624 03:32:21.459220    6914 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0624 03:32:21.459234    6914 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0624 03:32:21.459251    6914 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0624 03:32:21.459354    6914 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0624 03:32:21.467578    6914 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0624 03:32:21.467727    6914 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0624 03:32:21.468272    6914 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0624 03:32:21.468904    6914 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0624 03:32:21.469014    6914 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0624 03:32:21.469062    6914 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0624 03:32:21.469105    6914 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0624 03:32:21.469125    6914 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	W0624 03:32:22.348868    6914 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0624 03:32:22.349064    6914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0624 03:32:22.349318    6914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0624 03:32:22.383735    6914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0624 03:32:22.392715    6914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0624 03:32:22.395839    6914 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0624 03:32:22.395871    6914 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0624 03:32:22.395885    6914 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0624 03:32:22.395909    6914 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0624 03:32:22.395923    6914 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0624 03:32:22.395949    6914 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0624 03:32:22.424019    6914 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0624 03:32:22.424046    6914 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0624 03:32:22.424069    6914 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0624 03:32:22.424084    6914 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0624 03:32:22.424107    6914 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0624 03:32:22.424181    6914 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0624 03:32:22.429587    6914 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0624 03:32:22.429717    6914 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0624 03:32:22.432794    6914 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0624 03:32:22.432890    6914 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	W0624 03:32:22.435403    6914 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0624 03:32:22.435504    6914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0624 03:32:22.450886    6914 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0624 03:32:22.450906    6914 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0624 03:32:22.450888    6914 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0624 03:32:22.450922    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0624 03:32:22.450955    6914 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0624 03:32:22.450967    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0624 03:32:22.470525    6914 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0624 03:32:22.470537    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0624 03:32:22.474619    6914 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0624 03:32:22.474639    6914 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0624 03:32:22.474697    6914 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0624 03:32:22.502868    6914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0624 03:32:22.504879    6914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0624 03:32:22.509405    6914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0624 03:32:22.523741    6914 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0624 03:32:22.529206    6914 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0624 03:32:22.529222    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0624 03:32:22.530097    6914 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0624 03:32:22.530212    6914 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0624 03:32:22.537097    6914 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0624 03:32:22.537117    6914 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0624 03:32:22.537175    6914 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0624 03:32:22.551421    6914 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0624 03:32:22.551442    6914 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0624 03:32:22.551502    6914 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0624 03:32:22.551895    6914 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0624 03:32:22.551905    6914 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0624 03:32:22.551928    6914 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0624 03:32:22.603225    6914 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0624 03:32:22.603264    6914 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0624 03:32:22.603292    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0624 03:32:22.603297    6914 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0624 03:32:22.603333    6914 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0624 03:32:22.603384    6914 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0624 03:32:22.627591    6914 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0624 03:32:22.627609    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0624 03:32:22.864905    6914 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0624 03:32:22.864944    6914 cache_images.go:92] duration metric: took 1.412118708s to LoadCachedImages
	W0624 03:32:22.864984    6914 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
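
The block above is minikube's cache-load pipeline in miniature: an existence probe (stat -c "%s %y") exits with status 1, the tarball is copied over from the host cache, and the image is piped into Docker with sudo cat ... | docker load. A minimal local Go sketch of the same pattern follows; the plain cp standing in for minikube's scp-over-ssh, and the function and paths themselves, are illustrative assumptions rather than minikube's actual API.

package sketch

import (
	"fmt"
	"os/exec"
)

// loadCachedImage probes for the image tarball on the node, copies it from
// the host cache on a miss, then pipes it into the container runtime --
// the same stat -> transfer -> docker load sequence the log shows.
func loadCachedImage(hostCache, nodePath string) error {
	if err := exec.Command("stat", "-c", "%s %y", nodePath).Run(); err != nil {
		// Existence check failed (status 1 in the log): transfer from cache.
		if err := exec.Command("cp", hostCache, nodePath).Run(); err != nil {
			return fmt.Errorf("transfer %s: %w", hostCache, err)
		}
	}
	// Same shape as the logged command: sudo cat <tarball> | docker load.
	out, err := exec.Command("/bin/bash", "-c",
		"sudo cat "+nodePath+" | docker load").CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker load: %v: %s", err, out)
	}
	return nil
}
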
	I0624 03:32:22.864990    6914 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0624 03:32:22.865054    6914 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-252000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-252000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0624 03:32:22.865123    6914 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0624 03:32:22.878329    6914 cni.go:84] Creating CNI manager for ""
	I0624 03:32:22.878340    6914 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:32:22.878358    6914 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0624 03:32:22.878369    6914 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-252000 NodeName:stopped-upgrade-252000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0624 03:32:22.878427    6914 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-252000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0624 03:32:22.878494    6914 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0624 03:32:22.881490    6914 binaries.go:44] Found k8s binaries, skipping transfer
	I0624 03:32:22.881517    6914 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0624 03:32:22.884092    6914 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0624 03:32:22.889213    6914 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0624 03:32:22.893841    6914 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0624 03:32:22.899049    6914 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0624 03:32:22.900332    6914 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
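
The one-liner above makes the hosts entry idempotent: any stale line for control-plane.minikube.internal is filtered out before the fresh mapping is appended. A rough Go equivalent, assuming a direct file write rather than the temp-file-plus-sudo-cp dance the log performs (ensureHostsEntry is an illustrative name, not minikube's):

package sketch

import (
	"os"
	"strings"
)

// ensureHostsEntry drops any /etc/hosts line already ending in "\t<host>"
// and appends a fresh "<ip>\t<host>" mapping, mirroring the shell pipeline.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}
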
	I0624 03:32:22.908120    6914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:32:22.988978    6914 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0624 03:32:22.994248    6914 certs.go:68] Setting up /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000 for IP: 10.0.2.15
	I0624 03:32:22.994256    6914 certs.go:194] generating shared ca certs ...
	I0624 03:32:22.994264    6914 certs.go:226] acquiring lock for ca certs: {Name:mk1070bf28491713fa565ef6662c76d5a9260883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:32:22.994489    6914 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/ca.key
	I0624 03:32:22.994530    6914 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/proxy-client-ca.key
	I0624 03:32:22.994535    6914 certs.go:256] generating profile certs ...
	I0624 03:32:22.994593    6914 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/client.key
	I0624 03:32:22.994605    6914 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/apiserver.key.addb1750
	I0624 03:32:22.994616    6914 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/apiserver.crt.addb1750 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0624 03:32:23.111511    6914 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/apiserver.crt.addb1750 ...
	I0624 03:32:23.111530    6914 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/apiserver.crt.addb1750: {Name:mkbecaa613f108e08abc6698a40ff590b13932c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:32:23.111855    6914 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/apiserver.key.addb1750 ...
	I0624 03:32:23.111861    6914 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/apiserver.key.addb1750: {Name:mk0aec9a80b71fdacc6fd00e84a498bf758d161c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:32:23.112001    6914 certs.go:381] copying /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/apiserver.crt.addb1750 -> /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/apiserver.crt
	I0624 03:32:23.112139    6914 certs.go:385] copying /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/apiserver.key.addb1750 -> /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/apiserver.key
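
crypto.go:68 above issues the profile's apiserver certificate for four IP SANs: the kubernetes service VIP 10.96.0.1, loopback, 10.0.0.1, and the node IP 10.0.2.15. A sketch of that step with Go's crypto/x509, where the key size, validity window, and subject are assumptions rather than minikube's exact parameters:

package sketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

// signAPIServerCert issues a server certificate for the four IP SANs seen
// in the log, signed by an existing CA certificate and key.
func signAPIServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
}
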
	I0624 03:32:23.112315    6914 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/proxy-client.key
	I0624 03:32:23.112439    6914 certs.go:484] found cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/5136.pem (1338 bytes)
	W0624 03:32:23.112460    6914 certs.go:480] ignoring /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/5136_empty.pem, impossibly tiny 0 bytes
	I0624 03:32:23.112465    6914 certs.go:484] found cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca-key.pem (1675 bytes)
	I0624 03:32:23.112485    6914 certs.go:484] found cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem (1082 bytes)
	I0624 03:32:23.112502    6914 certs.go:484] found cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem (1123 bytes)
	I0624 03:32:23.112521    6914 certs.go:484] found cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/key.pem (1679 bytes)
	I0624 03:32:23.112558    6914 certs.go:484] found cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/files/etc/ssl/certs/51362.pem (1708 bytes)
	I0624 03:32:23.112924    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0624 03:32:23.119715    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0624 03:32:23.126603    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0624 03:32:23.133514    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0624 03:32:23.140152    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0624 03:32:23.147361    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0624 03:32:23.154677    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0624 03:32:23.161751    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0624 03:32:23.168154    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0624 03:32:23.175163    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/5136.pem --> /usr/share/ca-certificates/5136.pem (1338 bytes)
	I0624 03:32:23.181987    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/files/etc/ssl/certs/51362.pem --> /usr/share/ca-certificates/51362.pem (1708 bytes)
	I0624 03:32:23.188429    6914 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0624 03:32:23.194046    6914 ssh_runner.go:195] Run: openssl version
	I0624 03:32:23.195862    6914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0624 03:32:23.199433    6914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0624 03:32:23.201038    6914 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 24 10:31 /usr/share/ca-certificates/minikubeCA.pem
	I0624 03:32:23.201056    6914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0624 03:32:23.202689    6914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0624 03:32:23.205485    6914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5136.pem && ln -fs /usr/share/ca-certificates/5136.pem /etc/ssl/certs/5136.pem"
	I0624 03:32:23.208367    6914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5136.pem
	I0624 03:32:23.209855    6914 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 24 10:19 /usr/share/ca-certificates/5136.pem
	I0624 03:32:23.209874    6914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5136.pem
	I0624 03:32:23.211619    6914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5136.pem /etc/ssl/certs/51391683.0"
	I0624 03:32:23.214966    6914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/51362.pem && ln -fs /usr/share/ca-certificates/51362.pem /etc/ssl/certs/51362.pem"
	I0624 03:32:23.217939    6914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/51362.pem
	I0624 03:32:23.219310    6914 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 24 10:19 /usr/share/ca-certificates/51362.pem
	I0624 03:32:23.219328    6914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/51362.pem
	I0624 03:32:23.221189    6914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/51362.pem /etc/ssl/certs/3ec20f2e.0"
	I0624 03:32:23.224254    6914 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0624 03:32:23.225872    6914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0624 03:32:23.227826    6914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0624 03:32:23.229665    6914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0624 03:32:23.231560    6914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0624 03:32:23.233502    6914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0624 03:32:23.235259    6914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
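
Each openssl x509 -checkend 86400 call above asks whether a certificate expires within the next 24 hours (86400 seconds); a nonzero exit would trigger regeneration. The same test expressed in Go, as a self-contained sketch:

package sketch

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"os"
	"time"
)

// certExpiresWithin reports whether the PEM certificate at path expires
// within d -- the Go analogue of `openssl x509 -checkend`.
func certExpiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block in " + path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}
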
	I0624 03:32:23.237151    6914 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-252000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51139 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-252000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0624 03:32:23.237224    6914 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0624 03:32:23.248237    6914 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0624 03:32:23.251409    6914 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0624 03:32:23.251415    6914 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0624 03:32:23.251421    6914 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0624 03:32:23.251442    6914 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0624 03:32:23.254508    6914 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0624 03:32:23.254554    6914 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-252000" does not appear in /Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:32:23.254572    6914 kubeconfig.go:62] /Users/jenkins/minikube-integration/19124-4612/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-252000" cluster setting kubeconfig missing "stopped-upgrade-252000" context setting]
	I0624 03:32:23.254748    6914 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/kubeconfig: {Name:mkbbeb070f5681b596c7409cd66efdb520d422d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:32:23.255391    6914 kapi.go:59] client config for stopped-upgrade-252000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/client.key", CAFile:"/Users/jenkins/minikube-integration/19124-4612/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10210ed80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0624 03:32:23.256225    6914 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0624 03:32:23.258961    6914 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-252000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
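
Drift detection here rests on diff's exit-status contract: 0 means the rendered kubeadm.yaml is unchanged, 1 means the files differ (above, the criSocket scheme and cgroup driver changed across the upgrade), and anything higher is a real error. A small Go sketch of that check; the function name is illustrative:

package sketch

import (
	"errors"
	"os/exec"
)

// configDrifted runs diff -u over the old and new configs. Exit status 1 is
// the "reconfigure the cluster" signal, not a failure.
func configDrifted(oldPath, newPath string) (drifted bool, patch string, err error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // files identical
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, string(out), nil // files differ: reconfigure
	}
	return false, "", err
}
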
	I0624 03:32:23.258967    6914 kubeadm.go:1154] stopping kube-system containers ...
	I0624 03:32:23.259007    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0624 03:32:23.269397    6914 docker.go:483] Stopping containers: [f1ef49ef3795 05c0542721e3 f481d5c8ca3d 335d1abf4b16 bf89cebed9fa 54abcad50314 a77b085de8ed d0208fca4534]
	I0624 03:32:23.269461    6914 ssh_runner.go:195] Run: docker stop f1ef49ef3795 05c0542721e3 f481d5c8ca3d 335d1abf4b16 bf89cebed9fa 54abcad50314 a77b085de8ed d0208fca4534
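
Stopping the kube-system containers is a two-step list-then-stop driven by the k8s_.*_(kube-system)_ name filter shown above. Sketched in Go, under the same assumptions as the earlier snippets (local exec standing in for ssh_runner):

package sketch

import (
	"os/exec"
	"strings"
)

// stopKubeSystemContainers lists container IDs whose names match the
// kube-system pattern, then stops them all in one docker invocation.
func stopKubeSystemContainers() error {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		return err
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return nil
	}
	return exec.Command("docker", append([]string{"stop"}, ids...)...).Run()
}
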
	I0624 03:32:23.279646    6914 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0624 03:32:23.285599    6914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0624 03:32:23.288262    6914 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0624 03:32:23.288267    6914 kubeadm.go:156] found existing configuration files:
	
	I0624 03:32:23.288286    6914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51139 /etc/kubernetes/admin.conf
	I0624 03:32:23.291294    6914 kubeadm.go:162] "https://control-plane.minikube.internal:51139" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51139 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0624 03:32:23.291311    6914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0624 03:32:23.294134    6914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51139 /etc/kubernetes/kubelet.conf
	I0624 03:32:23.296475    6914 kubeadm.go:162] "https://control-plane.minikube.internal:51139" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51139 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0624 03:32:23.296490    6914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0624 03:32:23.299568    6914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51139 /etc/kubernetes/controller-manager.conf
	I0624 03:32:23.302471    6914 kubeadm.go:162] "https://control-plane.minikube.internal:51139" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51139 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0624 03:32:23.302494    6914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0624 03:32:23.304903    6914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51139 /etc/kubernetes/scheduler.conf
	I0624 03:32:23.307635    6914 kubeadm.go:162] "https://control-plane.minikube.internal:51139" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51139 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0624 03:32:23.307653    6914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0624 03:32:23.310642    6914 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0624 03:32:23.313222    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0624 03:32:23.335961    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0624 03:32:23.928951    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0624 03:32:24.037940    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0624 03:32:24.069487    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0624 03:32:24.092567    6914 api_server.go:52] waiting for apiserver process to appear ...
	I0624 03:32:24.092642    6914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 03:32:24.594431    6914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 03:32:25.094741    6914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 03:32:25.099377    6914 api_server.go:72] duration metric: took 1.006820167s to wait for apiserver process to appear ...
	I0624 03:32:25.099388    6914 api_server.go:88] waiting for apiserver healthz status ...
	I0624 03:32:25.099397    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
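
From here the start path polls the apiserver's /healthz endpoint until it answers 200 OK or the wait times out (the timeout that the "stopped:" line below eventually reports). A minimal sketch of such a probe; skipping TLS verification is a simplification for brevity, whereas minikube builds its client from the cluster CA (see the kapi.go client config above):

package sketch

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 or timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}
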
	I0624 03:32:30.252657    6932 ssh_runner.go:235] Completed: sudo systemctl restart docker: (13.374935375s)
	I0624 03:32:30.252716    6932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0624 03:32:30.257966    6932 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0624 03:32:30.267087    6932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0624 03:32:30.271838    6932 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0624 03:32:30.354296    6932 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0624 03:32:30.438590    6932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:32:30.521192    6932 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0624 03:32:30.527624    6932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0624 03:32:30.532195    6932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:32:30.621373    6932 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0624 03:32:30.660382    6932 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0624 03:32:30.660440    6932 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0624 03:32:30.663391    6932 start.go:562] Will wait 60s for crictl version
	I0624 03:32:30.663443    6932 ssh_runner.go:195] Run: which crictl
	I0624 03:32:30.664767    6932 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0624 03:32:30.676929    6932 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0624 03:32:30.676994    6932 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0624 03:32:30.689998    6932 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0624 03:32:30.707997    6932 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0624 03:32:30.708064    6932 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0624 03:32:30.709437    6932 kubeadm.go:877] updating cluster {Name:running-upgrade-398000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51210 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-398000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0624 03:32:30.709487    6932 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0624 03:32:30.709528    6932 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0624 03:32:30.720324    6932 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0624 03:32:30.720332    6932 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
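
The "wasn't preloaded" line above is the crux of this upgrade path: the old preload tarball still tags images as k8s.gcr.io, so an exact-match lookup for the newer registry.k8s.io names misses and every image must be reloaded from the on-host cache. The membership test amounts to the following sketch (hasImage is an illustrative name):

package sketch

import (
	"os/exec"
	"strings"
)

// hasImage lists the runtime's repo:tag pairs and looks for an exact match,
// the check behind minikube's "wasn't preloaded" decision.
func hasImage(want string) (bool, error) {
	out, err := exec.Command("docker", "images",
		"--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line == want {
			return true, nil
		}
	}
	return false, nil
}
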
	I0624 03:32:30.720374    6932 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0624 03:32:30.723238    6932 ssh_runner.go:195] Run: which lz4
	I0624 03:32:30.724616    6932 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0624 03:32:30.725739    6932 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0624 03:32:30.725749    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0624 03:32:31.526684    6932 docker.go:649] duration metric: took 802.104333ms to copy over tarball
	I0624 03:32:31.526757    6932 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0624 03:32:32.840437    6932 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.313678125s)
	I0624 03:32:32.840450    6932 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0624 03:32:32.856097    6932 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0624 03:32:32.859208    6932 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0624 03:32:32.864784    6932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:32:32.942133    6932 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0624 03:32:30.101464    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:32:30.101498    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:32:34.170457    6932 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.228314s)
	I0624 03:32:34.170576    6932 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0624 03:32:34.183251    6932 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0624 03:32:34.183277    6932 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0624 03:32:34.183286    6932 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0624 03:32:34.189682    6932 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0624 03:32:34.189684    6932 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0624 03:32:34.189753    6932 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0624 03:32:34.189839    6932 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0624 03:32:34.189909    6932 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0624 03:32:34.189926    6932 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0624 03:32:34.189930    6932 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0624 03:32:34.190112    6932 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0624 03:32:34.199623    6932 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0624 03:32:34.199705    6932 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0624 03:32:34.199913    6932 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0624 03:32:34.199915    6932 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0624 03:32:34.200199    6932 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0624 03:32:34.200198    6932 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0624 03:32:34.200336    6932 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0624 03:32:34.200601    6932 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	W0624 03:32:35.001606    6932 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0624 03:32:35.002030    6932 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0624 03:32:35.030712    6932 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0624 03:32:35.032232    6932 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0624 03:32:35.032269    6932 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0624 03:32:35.032320    6932 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0624 03:32:35.050702    6932 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0624 03:32:35.050729    6932 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0624 03:32:35.050811    6932 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0624 03:32:35.062883    6932 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0624 03:32:35.063009    6932 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0624 03:32:35.071733    6932 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0624 03:32:35.071749    6932 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0624 03:32:35.071766    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0624 03:32:35.073658    6932 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0624 03:32:35.084610    6932 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0624 03:32:35.101011    6932 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0624 03:32:35.101034    6932 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0624 03:32:35.101088    6932 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0624 03:32:35.112185    6932 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0624 03:32:35.112198    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0624 03:32:35.116451    6932 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0624 03:32:35.116471    6932 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0624 03:32:35.116523    6932 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0624 03:32:35.118154    6932 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0624 03:32:35.125257    6932 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0624 03:32:35.236886    6932 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	W0624 03:32:35.243876    6932 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0624 03:32:35.243914    6932 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0624 03:32:35.244088    6932 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0624 03:32:35.387328    6932 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0624 03:32:35.387373    6932 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0624 03:32:35.387405    6932 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0624 03:32:35.387423    6932 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0624 03:32:35.387435    6932 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0624 03:32:35.387444    6932 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0624 03:32:35.387471    6932 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0624 03:32:35.387471    6932 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0624 03:32:35.387499    6932 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0624 03:32:35.387505    6932 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0624 03:32:35.387512    6932 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0624 03:32:35.387512    6932 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0624 03:32:35.387537    6932 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0624 03:32:35.387551    6932 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0624 03:32:35.410630    6932 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0624 03:32:35.410641    6932 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0624 03:32:35.410763    6932 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0624 03:32:35.423483    6932 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0624 03:32:35.423542    6932 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0624 03:32:35.423551    6932 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0624 03:32:35.423561    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0624 03:32:35.423608    6932 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0624 03:32:35.425287    6932 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0624 03:32:35.425297    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0624 03:32:35.436210    6932 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0624 03:32:35.436226    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0624 03:32:35.487806    6932 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0624 03:32:35.487832    6932 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0624 03:32:35.487843    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0624 03:32:35.523035    6932 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0624 03:32:35.523072    6932 cache_images.go:92] duration metric: took 1.339790833s to LoadCachedImages
	W0624 03:32:35.523116    6932 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	I0624 03:32:35.523124    6932 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0624 03:32:35.523180    6932 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-398000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-398000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0624 03:32:35.523242    6932 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0624 03:32:35.536789    6932 cni.go:84] Creating CNI manager for ""
	I0624 03:32:35.536805    6932 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:32:35.536813    6932 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0624 03:32:35.536822    6932 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-398000 NodeName:running-upgrade-398000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0624 03:32:35.536890    6932 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-398000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0624 03:32:35.536948    6932 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0624 03:32:35.539922    6932 binaries.go:44] Found k8s binaries, skipping transfer
	I0624 03:32:35.539962    6932 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0624 03:32:35.542777    6932 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0624 03:32:35.547777    6932 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0624 03:32:35.552347    6932 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0624 03:32:35.557363    6932 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0624 03:32:35.558472    6932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:32:35.642821    6932 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0624 03:32:35.647650    6932 certs.go:68] Setting up /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000 for IP: 10.0.2.15
	I0624 03:32:35.647655    6932 certs.go:194] generating shared ca certs ...
	I0624 03:32:35.647662    6932 certs.go:226] acquiring lock for ca certs: {Name:mk1070bf28491713fa565ef6662c76d5a9260883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:32:35.647824    6932 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/ca.key
	I0624 03:32:35.647880    6932 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/proxy-client-ca.key
	I0624 03:32:35.647885    6932 certs.go:256] generating profile certs ...
	I0624 03:32:35.647957    6932 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/client.key
	I0624 03:32:35.647976    6932 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/apiserver.key.8ad61615
	I0624 03:32:35.647990    6932 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/apiserver.crt.8ad61615 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0624 03:32:35.748513    6932 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/apiserver.crt.8ad61615 ...
	I0624 03:32:35.748528    6932 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/apiserver.crt.8ad61615: {Name:mk7cb03054a669937a45b7bb1f7d8fe1bc07de87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:32:35.748813    6932 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/apiserver.key.8ad61615 ...
	I0624 03:32:35.748817    6932 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/apiserver.key.8ad61615: {Name:mk9c29e898cba469e6a986fd7743e831a225721e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:32:35.748957    6932 certs.go:381] copying /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/apiserver.crt.8ad61615 -> /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/apiserver.crt
	I0624 03:32:35.749092    6932 certs.go:385] copying /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/apiserver.key.8ad61615 -> /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/apiserver.key
	I0624 03:32:35.749243    6932 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/proxy-client.key
	I0624 03:32:35.749373    6932 certs.go:484] found cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/5136.pem (1338 bytes)
	W0624 03:32:35.749403    6932 certs.go:480] ignoring /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/5136_empty.pem, impossibly tiny 0 bytes
	I0624 03:32:35.749409    6932 certs.go:484] found cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca-key.pem (1675 bytes)
	I0624 03:32:35.749440    6932 certs.go:484] found cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem (1082 bytes)
	I0624 03:32:35.749467    6932 certs.go:484] found cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem (1123 bytes)
	I0624 03:32:35.749493    6932 certs.go:484] found cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/key.pem (1679 bytes)
	I0624 03:32:35.749545    6932 certs.go:484] found cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/files/etc/ssl/certs/51362.pem (1708 bytes)
	I0624 03:32:35.749887    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0624 03:32:35.757296    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0624 03:32:35.764166    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0624 03:32:35.770524    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0624 03:32:35.777673    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0624 03:32:35.784846    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0624 03:32:35.791724    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0624 03:32:35.798645    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0624 03:32:35.805912    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/files/etc/ssl/certs/51362.pem --> /usr/share/ca-certificates/51362.pem (1708 bytes)
	I0624 03:32:35.813133    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0624 03:32:35.819781    6932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/5136.pem --> /usr/share/ca-certificates/5136.pem (1338 bytes)
	I0624 03:32:35.826807    6932 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0624 03:32:35.831834    6932 ssh_runner.go:195] Run: openssl version
	I0624 03:32:35.833782    6932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/51362.pem && ln -fs /usr/share/ca-certificates/51362.pem /etc/ssl/certs/51362.pem"
	I0624 03:32:35.836895    6932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/51362.pem
	I0624 03:32:35.838160    6932 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 24 10:19 /usr/share/ca-certificates/51362.pem
	I0624 03:32:35.838183    6932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/51362.pem
	I0624 03:32:35.840095    6932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/51362.pem /etc/ssl/certs/3ec20f2e.0"
	I0624 03:32:35.843009    6932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0624 03:32:35.846593    6932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0624 03:32:35.848535    6932 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 24 10:31 /usr/share/ca-certificates/minikubeCA.pem
	I0624 03:32:35.848559    6932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0624 03:32:35.850526    6932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0624 03:32:35.853909    6932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5136.pem && ln -fs /usr/share/ca-certificates/5136.pem /etc/ssl/certs/5136.pem"
	I0624 03:32:35.857268    6932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5136.pem
	I0624 03:32:35.858620    6932 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 24 10:19 /usr/share/ca-certificates/5136.pem
	I0624 03:32:35.858639    6932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5136.pem
	I0624 03:32:35.860632    6932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5136.pem /etc/ssl/certs/51391683.0"
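
The three blocks above all apply the same OpenSSL trust-store convention: compute the certificate's subject hash ("openssl x509 -hash -noout") and point /etc/ssl/certs/<hash>.0 at the PEM file so OpenSSL-based clients can find it. A sketch of the equivalent in Go, shelling out for the hash; the paths are placeholders taken from the log:

    // certlink.go - sketch of the <subject-hash>.0 symlink convention.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkCert computes the OpenSSL subject hash of a PEM certificate and
    // points <certsDir>/<hash>.0 at it ("ln -fs" semantics).
    func linkCert(pemPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("%s/%s.0", certsDir, hash)
        os.Remove(link) // replace any existing link, as "ln -fs" would
        return os.Symlink(pemPath, link)
    }

    func main() {
        fmt.Println(linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
    }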
	I0624 03:32:35.863447    6932 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0624 03:32:35.865067    6932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0624 03:32:35.866764    6932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0624 03:32:35.868825    6932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0624 03:32:35.870579    6932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0624 03:32:35.872785    6932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0624 03:32:35.874470    6932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
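
Each "-checkend 86400" run above asks whether the certificate expires within the next 24 hours (86400 seconds); a nonzero exit would force regeneration. The same check in pure Go using only the standard library; the path is a placeholder from the log:

    // checkend.go - stdlib equivalent of "openssl x509 -checkend 86400".
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the certificate at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        // Expiring "within d" means now+d falls past NotAfter.
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        fmt.Println(expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour))
    }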
	I0624 03:32:35.876093    6932 kubeadm.go:391] StartCluster: {Name:running-upgrade-398000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51210 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-398000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0624 03:32:35.876159    6932 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0624 03:32:35.886374    6932 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0624 03:32:35.890510    6932 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0624 03:32:35.890518    6932 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0624 03:32:35.890525    6932 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0624 03:32:35.890547    6932 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0624 03:32:35.893390    6932 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0624 03:32:35.893692    6932 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-398000" does not appear in /Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:32:35.893793    6932 kubeconfig.go:62] /Users/jenkins/minikube-integration/19124-4612/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-398000" cluster setting kubeconfig missing "running-upgrade-398000" context setting]
	I0624 03:32:35.894023    6932 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/kubeconfig: {Name:mkbbeb070f5681b596c7409cd66efdb520d422d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:32:35.894473    6932 kapi.go:59] client config for running-upgrade-398000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/client.key", CAFile:"/Users/jenkins/minikube-integration/19124-4612/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10655ed80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0624 03:32:35.894791    6932 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0624 03:32:35.897605    6932 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-398000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
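
Drift detection here rests on diff's exit status: 0 means the deployed kubeadm.yaml matches the new one, 1 means they differ and the cluster must be reconfigured, 2 means trouble (such as a missing file). A sketch of that decision with paths taken from the log; this is not minikube's internal API:

    // drift.go - sketch of the diff-based kubeadm config drift check.
    package main

    import (
        "fmt"
        "os/exec"
    )

    // configDrifted runs "diff -u old new": exit 0 means identical, exit 1
    // means the files differ (reconfigure), anything else is an error.
    func configDrifted(oldPath, newPath string) (bool, string, error) {
        out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
        if err == nil {
            return false, "", nil
        }
        if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
            return true, string(out), nil // the diff itself is the evidence
        }
        return false, "", err
    }

    func main() {
        drifted, diff, err := configDrifted(
            "/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        fmt.Println(drifted, err)
        fmt.Print(diff)
    }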
	I0624 03:32:35.897611    6932 kubeadm.go:1154] stopping kube-system containers ...
	I0624 03:32:35.897649    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0624 03:32:35.908738    6932 docker.go:483] Stopping containers: [1fe49719b853 97bd8b01ebb9 46cc05d81f82 aae9a727b1ef 5e68f03fc08d ff24041fb2ac f0f772cfc12e 1ebbfbc68569 318a5cc223b5 b8559e67098a 802c3a1e9cad b62fd1734dff fc34224f55d0 dac2a23ff62a 1300b36c45bd 091967d291c6]
	I0624 03:32:35.908796    6932 ssh_runner.go:195] Run: docker stop 1fe49719b853 97bd8b01ebb9 46cc05d81f82 aae9a727b1ef 5e68f03fc08d ff24041fb2ac f0f772cfc12e 1ebbfbc68569 318a5cc223b5 b8559e67098a 802c3a1e9cad b62fd1734dff fc34224f55d0 dac2a23ff62a 1300b36c45bd 091967d291c6
	I0624 03:32:35.919936    6932 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0624 03:32:36.012904    6932 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0624 03:32:36.016950    6932 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5639 Jun 24 10:32 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Jun 24 10:32 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jun 24 10:32 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Jun 24 10:32 /etc/kubernetes/scheduler.conf
	
	I0624 03:32:36.016985    6932 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51210 /etc/kubernetes/admin.conf
	I0624 03:32:36.020526    6932 kubeadm.go:162] "https://control-plane.minikube.internal:51210" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51210 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0624 03:32:36.020553    6932 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0624 03:32:36.023951    6932 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51210 /etc/kubernetes/kubelet.conf
	I0624 03:32:36.027155    6932 kubeadm.go:162] "https://control-plane.minikube.internal:51210" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51210 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0624 03:32:36.027181    6932 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0624 03:32:36.029737    6932 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51210 /etc/kubernetes/controller-manager.conf
	I0624 03:32:36.032669    6932 kubeadm.go:162] "https://control-plane.minikube.internal:51210" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51210 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0624 03:32:36.032686    6932 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0624 03:32:36.035964    6932 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51210 /etc/kubernetes/scheduler.conf
	I0624 03:32:36.038649    6932 kubeadm.go:162] "https://control-plane.minikube.internal:51210" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51210 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0624 03:32:36.038672    6932 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
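
The four grep-and-remove rounds above prune any kubeconfig that does not reference the expected control-plane endpoint, so the "kubeadm init phase kubeconfig" step that follows regenerates them. A simplified sketch of the same pruning; it reads the files directly instead of shelling out to grep and rm with sudo as the real run does:

    // staleconf.go - sketch of the endpoint-based kubeconfig pruning above.
    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // pruneStale removes any listed kubeconfig that does not mention the
    // expected control-plane endpoint.
    func pruneStale(endpoint string, files []string) {
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil {
                continue // unreadable or missing: nothing to prune
            }
            if !bytes.Contains(data, []byte(endpoint)) {
                fmt.Println("removing stale", f)
                os.Remove(f)
            }
        }
    }

    func main() {
        pruneStale("https://control-plane.minikube.internal:51210", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }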
	I0624 03:32:36.041261    6932 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0624 03:32:36.044620    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0624 03:32:36.066492    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0624 03:32:36.486726    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0624 03:32:36.730288    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0624 03:32:36.751816    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
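
The restart then replays the kubeadm init phases in a fixed order: certs, kubeconfig, kubelet-start, control-plane, etcd. A sketch of that sequence as a loop; the command strings mirror the log, and error handling is minimal:

    // phases.go - sketch of the ordered "kubeadm init phase" replay above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        phases := []string{
            "certs all",
            "kubeconfig all",
            "kubelet-start",
            "control-plane all",
            "etcd local",
        }
        for _, p := range phases {
            // Same shape as the logged commands: minikube's binaries first on PATH.
            cmd := `sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase ` +
                p + ` --config /var/tmp/minikube/kubeadm.yaml`
            if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
                fmt.Printf("phase %q failed: %v\n%s", p, err, out)
                return
            }
        }
    }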
	I0624 03:32:36.772713    6932 api_server.go:52] waiting for apiserver process to appear ...
	I0624 03:32:36.772785    6932 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 03:32:37.275116    6932 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 03:32:37.775093    6932 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 03:32:38.274960    6932 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 03:32:38.774830    6932 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 03:32:35.101649    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:32:35.101662    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:32:39.274826    6932 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 03:32:39.279170    6932 api_server.go:72] duration metric: took 2.506481167s to wait for apiserver process to appear ...
	I0624 03:32:39.279179    6932 api_server.go:88] waiting for apiserver healthz status ...
	I0624 03:32:39.279187    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
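
From here the two minikube processes (pids 6914 and 6932, each driving its own profile) interleave the same wait loop: GET /healthz with a short client timeout, log the "stopped: ... context deadline exceeded" failure, back off, retry until the budget is exhausted. A sketch of that loop; TLS verification is skipped only because this sketch has no cluster CA or client certificates to present, unlike the real client config in the kapi.go line earlier:

    // healthz.go - sketch of the healthz wait loop producing the lines below.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns 200 or the budget runs out.
    func waitForHealthz(url string, budget time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(budget)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        fmt.Println(waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute))
    }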
	I0624 03:32:40.101903    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:32:40.101946    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:32:44.281253    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:32:44.281304    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:32:45.102681    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:32:45.102733    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:32:49.281517    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:32:49.281559    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:32:50.103385    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:32:50.103432    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:32:54.281933    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:32:54.281959    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:32:55.104292    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:32:55.104359    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:32:59.282339    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:32:59.282363    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:00.104852    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:00.104873    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:04.282862    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:04.282906    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:05.105985    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:05.106045    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:09.283613    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:09.283638    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:10.107018    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:10.107052    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:14.284537    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:14.284579    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:15.108829    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:15.108888    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:19.285745    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:19.285875    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:20.110063    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:20.110128    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:24.287567    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:24.287627    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:25.112550    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:25.112806    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:33:25.137621    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:33:25.137746    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:33:25.155040    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:33:25.155124    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:33:25.168606    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:33:25.168669    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:33:25.179813    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:33:25.179874    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:33:25.190443    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:33:25.190512    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:33:25.200977    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:33:25.201041    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:33:25.214930    6914 logs.go:276] 0 containers: []
	W0624 03:33:25.214941    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:33:25.215006    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:33:25.225344    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:33:25.225360    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:33:25.225365    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:33:25.239817    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:33:25.239831    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:33:25.253235    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:33:25.253254    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:33:25.267579    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:33:25.267593    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:33:25.282118    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:33:25.282128    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:33:25.296118    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:33:25.296128    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:33:25.307230    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:33:25.307240    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:33:25.343899    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:33:25.343910    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:33:25.457147    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:33:25.457161    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:33:25.483649    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:33:25.483659    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:33:25.496228    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:33:25.496239    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:33:25.513987    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:33:25.513997    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:33:25.525331    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:33:25.525342    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:33:25.529409    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:33:25.529414    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:33:25.543158    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:33:25.543167    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:33:25.554653    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:33:25.554664    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:33:25.565369    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:33:25.565380    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
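
When healthz stalls, the log-gathering pass above enumerates each control-plane component's containers ("docker ps -a --filter=name=k8s_<name>") and pulls the last 400 lines from each with "docker logs --tail 400". A condensed sketch of that collection, with error handling elided:

    // gather.go - condensed sketch of the per-component log collection above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all containers (running or exited) whose name carries
    // the k8s_<component> prefix that cri-dockerd gives kubelet-managed pods.
    func containerIDs(component string) []string {
        out, _ := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        return strings.Fields(string(out))
    }

    func main() {
        for _, comp := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "storage-provisioner"} {
            for _, id := range containerIDs(comp) {
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("== %s %s: %d bytes of logs\n", comp, id, len(logs))
            }
        }
    }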
	I0624 03:33:28.091868    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:29.289619    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:29.289643    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:33.092884    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:33.093124    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:33:33.115816    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:33:33.115923    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:33:33.132413    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:33:33.132493    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:33:33.145195    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:33:33.145262    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:33:33.156809    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:33:33.156885    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:33:33.171049    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:33:33.171118    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:33:33.184349    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:33:33.184406    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:33:33.195355    6914 logs.go:276] 0 containers: []
	W0624 03:33:33.195369    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:33:33.195449    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:33:33.206067    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:33:33.206083    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:33:33.206088    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:33:33.233604    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:33:33.233616    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:33:33.250614    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:33:33.250625    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:33:33.262253    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:33:33.262263    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:33:33.274402    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:33:33.274411    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:33:33.286629    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:33:33.286638    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:33:33.323001    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:33:33.323014    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:33:33.327185    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:33:33.327194    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:33:33.365283    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:33:33.365294    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:33:33.379053    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:33:33.379064    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:33:33.391091    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:33:33.391102    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:33:33.405230    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:33:33.405244    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:33:33.423835    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:33:33.423845    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:33:33.437864    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:33:33.437876    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:33:33.454635    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:33:33.454647    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:33:33.472352    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:33:33.472362    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:33:33.496803    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:33:33.496811    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:33:34.291763    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:34.291789    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:36.010745    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:39.293950    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:39.294165    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:33:39.311549    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:33:39.311632    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:33:39.324586    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:33:39.324654    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:33:39.336109    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:33:39.336174    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:33:39.346084    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:33:39.346142    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:33:39.358239    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:33:39.358319    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:33:39.372045    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:33:39.372112    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:33:39.382882    6932 logs.go:276] 0 containers: []
	W0624 03:33:39.382894    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:33:39.382952    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:33:39.393111    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:33:39.393127    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:33:39.393132    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:33:39.404452    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:33:39.404463    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:33:39.421938    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:33:39.421948    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:33:39.436798    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:33:39.436810    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:33:39.450550    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:33:39.450563    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:33:39.574684    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:33:39.574696    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:33:39.588026    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:33:39.588039    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:33:39.599670    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:33:39.599679    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:33:39.611907    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:33:39.611919    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:33:39.626417    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:33:39.626427    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:33:39.642806    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:33:39.642819    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:33:39.654304    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:33:39.654315    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:33:39.665851    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:33:39.665862    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:33:39.704614    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:33:39.704623    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:33:39.709365    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:33:39.709371    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:33:39.734286    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:33:39.734293    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:33:39.748491    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:33:39.748500    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:33:42.264700    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:41.013182    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:41.013482    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:33:41.045521    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:33:41.045648    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:33:41.064606    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:33:41.064703    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:33:41.078589    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:33:41.078668    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:33:41.090060    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:33:41.090142    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:33:41.101016    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:33:41.101080    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:33:41.113305    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:33:41.113366    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:33:41.123379    6914 logs.go:276] 0 containers: []
	W0624 03:33:41.123390    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:33:41.123446    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:33:41.134158    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:33:41.134207    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:33:41.134215    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:33:41.138236    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:33:41.138246    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:33:41.152675    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:33:41.152684    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:33:41.165478    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:33:41.165489    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:33:41.179232    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:33:41.179241    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:33:41.204249    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:33:41.204260    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:33:41.218588    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:33:41.218599    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:33:41.235924    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:33:41.235933    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:33:41.247337    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:33:41.247348    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:33:41.284870    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:33:41.284879    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:33:41.299368    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:33:41.299379    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:33:41.311161    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:33:41.311173    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:33:41.335727    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:33:41.335737    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:33:41.347540    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:33:41.347550    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:33:41.386129    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:33:41.386143    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:33:41.400797    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:33:41.400811    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:33:41.415040    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:33:41.415054    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:33:43.929140    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:47.266963    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:47.267150    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:33:47.279858    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:33:47.279936    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:33:47.290733    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:33:47.290802    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:33:47.301011    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:33:47.301090    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:33:47.312146    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:33:47.312214    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:33:47.323361    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:33:47.323431    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:33:47.333598    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:33:47.333657    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:33:47.343716    6932 logs.go:276] 0 containers: []
	W0624 03:33:47.343728    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:33:47.343787    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:33:47.354443    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:33:47.354462    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:33:47.354468    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:33:47.395244    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:33:47.395267    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:33:47.420786    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:33:47.420799    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:33:47.436307    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:33:47.436318    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:33:47.447722    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:33:47.447733    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:33:47.461935    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:33:47.461948    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:33:47.479835    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:33:47.479847    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:33:47.491065    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:33:47.491078    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:33:47.516513    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:33:47.516530    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:33:47.529005    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:33:47.529017    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:33:47.545423    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:33:47.545434    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:33:47.557807    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:33:47.557823    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:33:47.569122    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:33:47.569134    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:33:47.573973    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:33:47.573982    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:33:47.610958    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:33:47.610968    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:33:47.623249    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:33:47.623263    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:33:47.637706    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:33:47.637717    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
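
	The block above is one complete log-gathering pass: after a failed healthz probe, minikube enumerates each control-plane component's containers by name filter, tails the last 400 lines of each, and collects the host-side sources. A minimal sketch of the same sequence for a single component — the individual commands are copied verbatim from the Run: lines above, but the wrapper script itself is illustrative, not minikube code:

	    #!/bin/bash
	    # Illustrative reconstruction of one gathering pass (logs.go:123).
	    # List all containers, running or exited, for one component.
	    ids=$(docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}})
	    # Tail the last 400 lines of each matching container.
	    for id in $ids; do
	        docker logs --tail 400 "$id"
	    done
	    # Host-side sources gathered in the same pass:
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u docker -u cri-docker -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

	The same pass then repeats for etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet (which matches no containers here, hence the warning), and storage-provisioner.
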
	I0624 03:33:48.931514    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:48.931786    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:33:48.963952    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:33:48.964060    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:33:48.979053    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:33:48.979120    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:33:48.991640    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:33:48.991714    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:33:49.002736    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:33:49.002805    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:33:49.016511    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:33:49.016584    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:33:49.027043    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:33:49.027111    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:33:49.037431    6914 logs.go:276] 0 containers: []
	W0624 03:33:49.037441    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:33:49.037497    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:33:49.051637    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:33:49.051657    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:33:49.051663    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:33:49.074209    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:33:49.074220    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:33:49.088080    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:33:49.088091    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:33:49.101529    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:33:49.101540    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:33:49.118578    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:33:49.118588    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:33:49.153519    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:33:49.153530    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:33:49.167782    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:33:49.167793    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:33:49.178779    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:33:49.178790    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:33:49.190872    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:33:49.190882    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:33:49.208475    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:33:49.208485    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:33:49.231526    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:33:49.231535    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:33:49.235463    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:33:49.235469    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:33:49.269098    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:33:49.269109    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:33:49.283702    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:33:49.283712    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:33:49.298124    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:33:49.298135    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:33:49.309698    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:33:49.309709    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:33:49.324595    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:33:49.324605    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:33:50.152406    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:51.864668    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:55.154765    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:55.155127    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:33:55.185636    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:33:55.185782    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:33:55.203540    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:33:55.203623    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:33:55.216326    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:33:55.216402    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:33:55.228108    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:33:55.228172    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:33:55.238801    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:33:55.238861    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:33:55.249447    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:33:55.249506    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:33:55.260127    6932 logs.go:276] 0 containers: []
	W0624 03:33:55.260140    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:33:55.260207    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:33:55.270640    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:33:55.270658    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:33:55.270664    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:33:55.307893    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:33:55.307902    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:33:55.321214    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:33:55.321229    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:33:55.334086    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:33:55.334101    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:33:55.348870    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:33:55.348880    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:33:55.360958    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:33:55.360972    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:33:55.376177    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:33:55.376194    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:33:55.387354    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:33:55.387365    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:33:55.399737    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:33:55.399751    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:33:55.424663    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:33:55.424671    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:33:55.465616    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:33:55.465631    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:33:55.478333    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:33:55.478349    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:33:55.491785    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:33:55.491798    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:33:55.504817    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:33:55.504830    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:33:55.509482    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:33:55.509489    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:33:55.526095    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:33:55.526110    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:33:55.540197    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:33:55.540212    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:33:58.060266    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:56.866951    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:56.867295    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:33:56.898171    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:33:56.898301    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:33:56.917359    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:33:56.917457    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:33:56.932517    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:33:56.932596    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:33:56.944564    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:33:56.944633    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:33:56.956487    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:33:56.956553    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:33:56.967189    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:33:56.967258    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:33:56.977389    6914 logs.go:276] 0 containers: []
	W0624 03:33:56.977405    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:33:56.977461    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:33:56.988217    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:33:56.988236    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:33:56.988242    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:33:57.012274    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:33:57.012282    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:33:57.049109    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:33:57.049126    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:33:57.075406    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:33:57.075415    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:33:57.089353    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:33:57.089363    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:33:57.106863    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:33:57.106878    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:33:57.118022    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:33:57.118035    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:33:57.152923    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:33:57.152939    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:33:57.167451    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:33:57.167461    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:33:57.181243    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:33:57.181258    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:33:57.192893    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:33:57.192909    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:33:57.207049    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:33:57.207059    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:33:57.219353    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:33:57.219363    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:33:57.231168    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:33:57.231178    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:33:57.235397    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:33:57.235403    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:33:57.257978    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:33:57.257991    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:33:57.269346    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:33:57.269357    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:33:59.782330    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:34:03.062700    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:34:03.063106    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:34:03.095367    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:34:03.095521    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:34:03.114059    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:34:03.114140    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:34:03.127843    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:34:03.127915    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:34:03.139459    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:34:03.139531    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:34:03.151512    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:34:03.151575    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:34:03.162447    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:34:03.162510    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:34:03.174926    6932 logs.go:276] 0 containers: []
	W0624 03:34:03.174938    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:34:03.174993    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:34:03.189483    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:34:03.189500    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:34:03.189506    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:34:03.201293    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:34:03.201306    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:34:03.217126    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:34:03.217136    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:34:03.229093    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:34:03.229105    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:34:03.254373    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:34:03.254383    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:34:03.258577    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:34:03.258584    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:34:03.274561    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:34:03.274572    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:34:03.294042    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:34:03.294052    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:34:03.305822    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:34:03.305834    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:34:03.340338    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:34:03.340349    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:34:03.352833    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:34:03.352844    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:34:03.366276    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:34:03.366286    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:34:03.403117    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:34:03.403129    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:34:03.417115    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:34:03.417126    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:34:03.428849    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:34:03.428860    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:34:03.440671    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:34:03.440683    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:34:03.457428    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:34:03.457438    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:34:04.784568    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:34:04.784661    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:34:04.795393    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:34:04.795464    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:34:04.806188    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:34:04.806266    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:34:04.826533    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:34:04.826604    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:34:04.836962    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:34:04.837031    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:34:04.847686    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:34:04.847756    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:34:04.858084    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:34:04.858153    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:34:04.871279    6914 logs.go:276] 0 containers: []
	W0624 03:34:04.871290    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:34:04.871347    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:34:04.881422    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:34:04.881439    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:34:04.881444    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:34:04.916209    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:34:04.916220    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:34:04.930717    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:34:04.930728    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:34:04.945293    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:34:04.945303    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:34:04.956742    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:34:04.956753    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:34:04.968639    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:34:04.968652    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:34:04.980830    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:34:04.980841    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:34:04.992765    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:34:04.992776    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:34:05.029982    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:34:05.029992    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:34:05.044520    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:34:05.044532    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:34:05.971583    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:34:05.071709    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:34:05.071719    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:34:05.082463    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:34:05.082473    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:34:05.099794    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:34:05.099804    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:34:05.113625    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:34:05.113635    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:34:05.117799    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:34:05.117807    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:34:05.131826    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:34:05.131836    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:34:05.143429    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:34:05.143440    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:34:07.670648    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:34:10.974135    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:34:10.974473    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:34:10.998609    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:34:10.998725    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:34:11.014945    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:34:11.015016    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:34:11.026842    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:34:11.026908    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:34:11.037946    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:34:11.038027    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:34:11.048920    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:34:11.048988    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:34:11.060046    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:34:11.060110    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:34:11.071310    6932 logs.go:276] 0 containers: []
	W0624 03:34:11.071320    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:34:11.071371    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:34:11.082650    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:34:11.082668    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:34:11.082674    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:34:11.119551    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:34:11.119563    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:34:11.131143    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:34:11.131155    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:34:11.135722    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:34:11.135731    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:34:11.158297    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:34:11.158307    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:34:11.171765    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:34:11.171774    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:34:11.189119    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:34:11.189131    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:34:11.200862    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:34:11.200873    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:34:11.225776    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:34:11.225784    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:34:11.238849    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:34:11.238860    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:34:11.275833    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:34:11.275841    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:34:11.294406    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:34:11.294417    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:34:11.311651    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:34:11.311660    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:34:11.325202    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:34:11.325215    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:34:11.337569    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:34:11.337579    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:34:11.348932    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:34:11.348944    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:34:11.360431    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:34:11.360441    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:34:13.878351    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:34:12.672932    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:34:12.673041    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:34:12.690151    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:34:12.690220    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:34:12.699988    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:34:12.700056    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:34:12.715647    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:34:12.715706    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:34:12.726015    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:34:12.726079    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:34:12.736034    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:34:12.736098    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:34:12.746978    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:34:12.747039    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:34:12.761400    6914 logs.go:276] 0 containers: []
	W0624 03:34:12.761414    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:34:12.761466    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:34:12.771831    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:34:12.771849    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:34:12.771855    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:34:12.783579    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:34:12.783590    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:34:12.797564    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:34:12.797576    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:34:12.808489    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:34:12.808500    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:34:12.819882    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:34:12.819893    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:34:12.824102    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:34:12.824112    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:34:12.835721    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:34:12.835731    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:34:12.861236    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:34:12.861247    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:34:12.880000    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:34:12.880011    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:34:12.892125    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:34:12.892135    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:34:12.903166    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:34:12.903177    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:34:12.938922    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:34:12.938934    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:34:12.953412    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:34:12.953427    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:34:12.969454    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:34:12.969463    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:34:12.984442    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:34:12.984453    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:34:13.001404    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:34:13.001412    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:34:13.025684    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:34:13.025700    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
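
	Note that two minikube processes (PIDs 6914 and 6932) are interleaved throughout this transcript, each probing the same https://10.0.2.15:8443/healthz endpoint and running its own gathering passes, so timestamps occasionally appear out of order across processes. To read one stream at a time, the transcript can be split on the PID field — hypothetical post-processing with an assumed file name, not part of the report:

	    # The third whitespace-separated field of each glog line is the PID.
	    awk '$3 == 6914' test-output.log > pid-6914.log
	    awk '$3 == 6932' test-output.log > pid-6932.log
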
	I0624 03:34:18.880669    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:34:18.880847    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:34:18.896636    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:34:18.896707    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:34:18.911607    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:34:18.911684    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:34:18.922805    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:34:18.922882    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:34:18.934006    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:34:18.934075    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:34:18.944933    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:34:18.945002    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:34:18.955537    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:34:18.955609    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:34:18.965392    6932 logs.go:276] 0 containers: []
	W0624 03:34:18.965404    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:34:18.965454    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:34:18.975671    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:34:18.975688    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:34:18.975695    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:34:18.980404    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:34:18.980412    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:34:19.014430    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:34:19.014442    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:34:19.030768    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:34:19.030780    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:34:19.042857    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:34:19.042868    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:34:19.081897    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:34:19.081908    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:34:15.565865    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:34:19.103140    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:34:19.103149    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:34:19.114520    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:34:19.114534    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:34:19.130803    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:34:19.130814    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:34:19.144463    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:34:19.144474    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:34:19.161608    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:34:19.161618    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:34:19.173765    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:34:19.173778    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:34:19.187603    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:34:19.187613    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:34:19.200294    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:34:19.200304    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:34:19.213328    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:34:19.213340    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:34:19.231113    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:34:19.231122    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:34:19.242056    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:34:19.242067    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:34:21.770168    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:34:20.567640    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:34:20.567851    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:34:20.590164    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:34:20.590282    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:34:20.604348    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:34:20.604422    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:34:20.616134    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:34:20.616206    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:34:20.626831    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:34:20.626901    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:34:20.638878    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:34:20.638943    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:34:20.649944    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:34:20.650021    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:34:20.660275    6914 logs.go:276] 0 containers: []
	W0624 03:34:20.660287    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:34:20.660344    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:34:20.671019    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:34:20.671034    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:34:20.671041    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:34:20.683805    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:34:20.683819    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:34:20.707394    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:34:20.707405    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:34:20.718867    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:34:20.718882    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:34:20.736903    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:34:20.736917    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:34:20.751704    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:34:20.751717    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:34:20.765615    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:34:20.765630    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:34:20.777392    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:34:20.777401    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:34:20.791425    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:34:20.791436    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:34:20.805389    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:34:20.805400    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:34:20.816742    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:34:20.816752    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:34:20.841176    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:34:20.841187    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:34:20.855098    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:34:20.855107    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:34:20.891393    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:34:20.891405    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:34:20.909430    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:34:20.909445    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:34:20.922346    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:34:20.922358    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:34:20.961546    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:34:20.961556    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:34:23.467831    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:34:26.772511    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:34:26.772752    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:34:26.794563    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:34:26.794665    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:34:26.809333    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:34:26.809400    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:34:26.821721    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:34:26.821785    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:34:26.832298    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:34:26.832374    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:34:26.843590    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:34:26.843662    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:34:26.854284    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:34:26.854343    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:34:26.864357    6932 logs.go:276] 0 containers: []
	W0624 03:34:26.864367    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:34:26.864418    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:34:26.874712    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:34:26.874730    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:34:26.874739    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:34:26.899321    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:34:26.899329    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:34:26.914574    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:34:26.914585    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:34:26.929568    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:34:26.929578    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:34:26.943331    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:34:26.943341    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:34:26.956935    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:34:26.956947    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:34:26.970452    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:34:26.970466    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:34:26.986376    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:34:26.986386    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:34:27.003640    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:34:27.003649    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:34:27.016883    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:34:27.016893    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:34:27.021782    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:34:27.021790    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:34:27.034314    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:34:27.034327    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:34:27.045606    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:34:27.045617    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:34:27.057225    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:34:27.057236    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:34:27.079019    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:34:27.079028    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:34:27.090438    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:34:27.090449    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:34:27.129207    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:34:27.129219    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
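The block above is one complete iteration of the pattern that repeats for the rest of this log: a GET against https://10.0.2.15:8443/healthz fails with a client timeout, and the runner (pids 6914 and 6932 are two concurrent tests, each driving its own QEMU VM) responds by listing every control-plane container with docker ps -a --filter=name=k8s_<component> --format={{.ID}} and tailing the last 400 lines of each one found. Under QEMU's user-mode (slirp) networking the guest address 10.0.2.15 is generally not reachable from the host, which is consistent with every probe in this run timing out. A minimal sketch of that poll-then-gather loop, skipping TLS verification only to stay self-contained (waitForHealthz and containerIDs are illustrative names, not minikube's actual API):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "os/exec"
        "strings"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it answers
    // 200 OK or the overall deadline expires. Certificate verification is
    // skipped here for self-containment; the real probe trusts the cluster CA.
    func waitForHealthz(url string, deadline time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second, // source of the "Client.Timeout exceeded" errors above
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(3 * time.Second)
        }
        return fmt.Errorf("stopped: %s: timed out after %s", url, deadline)
    }

    // containerIDs lists all containers, running or exited, whose name matches
    // the kubeadm convention k8s_<component>, mirroring the logs.go:276 lines.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        err := waitForHealthz("https://10.0.2.15:8443/healthz", time.Minute)
        if err == nil {
            fmt.Println("apiserver healthy")
            return
        }
        fmt.Println(err)
        // On failure, tail the last 400 log lines of every control-plane container.
        for _, component := range []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
        } {
            ids, err := containerIDs(component)
            if err != nil || len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", component)
                continue
            }
            for _, id := range ids {
                out, _ := exec.Command("/bin/bash", "-c",
                    "docker logs --tail 400 "+id).CombinedOutput()
                fmt.Printf("==> %s [%s] <==\n%s", component, id, out)
            }
        }
    }

The real code runs the docker commands through its ssh_runner inside the guest; the sketch runs them locally only to show the shape of the loop.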
	I0624 03:34:28.470570    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:34:28.471103    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:34:28.509386    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:34:28.509547    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:34:28.530819    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:34:28.530929    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:34:28.546917    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:34:28.546998    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:34:28.565398    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:34:28.565466    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:34:28.576225    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:34:28.576297    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:34:28.586804    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:34:28.586868    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:34:28.596668    6914 logs.go:276] 0 containers: []
	W0624 03:34:28.596680    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:34:28.596740    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:34:28.607483    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:34:28.607499    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:34:28.607504    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:34:28.622303    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:34:28.622313    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:34:28.637029    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:34:28.637040    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:34:28.652960    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:34:28.652972    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:34:28.667005    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:34:28.667015    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:34:28.701448    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:34:28.701459    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:34:28.726908    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:34:28.726917    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:34:28.752838    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:34:28.752853    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:34:28.767499    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:34:28.767508    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:34:28.790504    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:34:28.790515    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:34:28.815178    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:34:28.815185    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:34:28.819521    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:34:28.819528    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:34:28.830979    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:34:28.830992    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:34:28.842375    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:34:28.842385    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:34:28.854421    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:34:28.854433    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:34:28.868568    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:34:28.868578    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:34:28.886012    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:34:28.886022    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:34:29.667300    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:34:31.424863    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:34:34.669952    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:34:34.670289    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:34:34.703768    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:34:34.703926    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:34:34.723067    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:34:34.723182    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:34:34.738183    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:34:34.738277    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:34:34.749917    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:34:34.750001    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:34:34.760284    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:34:34.760367    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:34:34.770797    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:34:34.770868    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:34:34.781402    6932 logs.go:276] 0 containers: []
	W0624 03:34:34.781413    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:34:34.781489    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:34:34.796071    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:34:34.796093    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:34:34.796100    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:34:34.812369    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:34:34.812384    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:34:34.828362    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:34:34.828376    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:34:34.841150    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:34:34.841162    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:34:34.855716    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:34:34.855727    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:34:34.873132    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:34:34.873143    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:34:34.884475    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:34:34.884486    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:34:34.926220    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:34:34.926231    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:34:34.940037    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:34:34.940049    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:34:34.951303    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:34:34.951314    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:34:34.977367    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:34:34.977378    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:34:35.016430    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:34:35.016441    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:34:35.021324    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:34:35.021332    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:34:35.037571    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:34:35.037583    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:34:35.049673    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:34:35.049687    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:34:35.063622    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:34:35.063632    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:34:35.077457    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:34:35.077469    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
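The "container status" step in each pass (the sudo `which crictl || echo crictl` ps -a || sudo docker ps -a line above) encodes a runtime fallback: if crictl is on PATH it is used, and if that invocation fails for any reason, including crictl not being installed at all, the trailing || runs plain docker ps -a instead. A small host-side sketch of the same fallback, assuming only bash and Docker are available:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus prefers crictl and falls back to docker. The backtick
    // substitution `which crictl || echo crictl` keeps the first command line
    // syntactically valid even when crictl is absent; its non-zero exit status
    // then triggers the "|| sudo docker ps -a" branch.
    func containerStatus() (string, error) {
        out, err := exec.Command("/bin/bash", "-c",
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("container status failed:", err)
        }
        fmt.Print(out)
    }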
	I0624 03:34:37.591588    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:34:36.427272    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:34:36.427580    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:34:36.462496    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:34:36.462625    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:34:36.481274    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:34:36.481371    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:34:36.494807    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:34:36.494883    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:34:36.506611    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:34:36.506678    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:34:36.516972    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:34:36.517031    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:34:36.527358    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:34:36.527432    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:34:36.538058    6914 logs.go:276] 0 containers: []
	W0624 03:34:36.538071    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:34:36.538132    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:34:36.548271    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:34:36.548288    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:34:36.548293    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:34:36.562454    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:34:36.562464    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:34:36.574135    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:34:36.574146    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:34:36.586262    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:34:36.586276    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:34:36.600265    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:34:36.600278    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:34:36.612154    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:34:36.612168    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:34:36.636277    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:34:36.636285    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:34:36.640690    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:34:36.640697    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:34:36.654205    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:34:36.654214    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:34:36.688685    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:34:36.688698    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:34:36.714415    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:34:36.714428    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:34:36.728281    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:34:36.728294    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:34:36.740004    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:34:36.740015    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:34:36.751262    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:34:36.751274    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:34:36.788659    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:34:36.788668    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:34:36.805723    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:34:36.805736    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:34:36.818162    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:34:36.818176    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:34:39.334850    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:34:42.594306    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:34:42.594418    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:34:42.606229    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:34:42.606321    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:34:42.617302    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:34:42.617370    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:34:42.628182    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:34:42.628249    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:34:42.645040    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:34:42.645108    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:34:42.656210    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:34:42.656273    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:34:42.666498    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:34:42.666563    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:34:42.676756    6932 logs.go:276] 0 containers: []
	W0624 03:34:42.676771    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:34:42.676830    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:34:42.687720    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:34:42.687740    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:34:42.687745    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:34:42.704798    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:34:42.704809    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:34:42.721069    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:34:42.721080    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:34:42.734565    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:34:42.734575    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:34:42.749647    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:34:42.749658    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:34:42.761173    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:34:42.761182    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:34:42.773211    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:34:42.773222    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:34:42.798035    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:34:42.798044    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:34:42.810200    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:34:42.810210    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:34:42.824077    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:34:42.824087    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:34:42.837693    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:34:42.837703    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:34:42.849357    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:34:42.849366    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:34:42.861676    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:34:42.861687    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:34:42.875067    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:34:42.875079    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:34:42.888221    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:34:42.888231    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:34:42.926810    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:34:42.926820    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:34:42.930971    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:34:42.930978    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:34:44.337185    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:34:44.337369    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:34:44.363307    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:34:44.363399    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:34:44.375314    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:34:44.375389    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:34:44.387357    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:34:44.387427    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:34:44.397725    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:34:44.397798    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:34:44.412565    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:34:44.412631    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:34:44.423061    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:34:44.423127    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:34:44.433248    6914 logs.go:276] 0 containers: []
	W0624 03:34:44.433259    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:34:44.433316    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:34:44.443615    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:34:44.443632    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:34:44.443637    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:34:44.458300    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:34:44.458310    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:34:44.472659    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:34:44.472669    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:34:44.484976    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:34:44.484987    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:34:44.499086    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:34:44.499101    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:34:44.513607    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:34:44.513621    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:34:44.532783    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:34:44.532798    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:34:44.544348    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:34:44.544362    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:34:44.559603    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:34:44.559617    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:34:44.571092    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:34:44.571104    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:34:44.607865    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:34:44.607874    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:34:44.642385    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:34:44.642396    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:34:44.656582    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:34:44.656593    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:34:44.660731    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:34:44.660741    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:34:44.685505    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:34:44.685519    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:34:44.701059    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:34:44.701074    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:34:44.718531    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:34:44.718545    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
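Besides the per-container tails, every gathering pass pulls three host-level sources from inside the VM: the docker and cri-docker systemd units, the kubelet unit, and the kernel ring buffer filtered to warnings and worse. A local sketch of those commands (minikube executes them over SSH via ssh_runner; in the dmesg invocation -P disables the pager, -H forces human-readable output, and -L=never strips color):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run executes a shell pipeline the way ssh_runner does, via /bin/bash -c,
    // and returns whatever output it produced.
    func run(cmd string) string {
        out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        return string(out)
    }

    func main() {
        // Last 400 journal entries for the container runtime and the kubelet.
        fmt.Print(run("sudo journalctl -u docker -u cri-docker -n 400"))
        fmt.Print(run("sudo journalctl -u kubelet -n 400"))
        // Kernel ring buffer, levels warn and above only, capped at 400 lines.
        fmt.Print(run("sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"))
    }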
	I0624 03:34:45.466977    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:34:47.245143    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:34:50.469556    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:34:50.469806    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:34:50.487767    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:34:50.487846    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:34:50.501336    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:34:50.501415    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:34:50.512962    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:34:50.513029    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:34:50.523739    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:34:50.523808    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:34:50.534399    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:34:50.534460    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:34:50.545073    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:34:50.545140    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:34:50.555146    6932 logs.go:276] 0 containers: []
	W0624 03:34:50.555160    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:34:50.555214    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:34:50.569720    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:34:50.569740    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:34:50.569746    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:34:50.581845    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:34:50.581858    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:34:50.586263    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:34:50.586270    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:34:50.599112    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:34:50.599125    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:34:50.613203    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:34:50.613213    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:34:50.624749    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:34:50.624760    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:34:50.636116    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:34:50.636128    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:34:50.671953    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:34:50.671965    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:34:50.687933    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:34:50.687947    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:34:50.699891    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:34:50.699901    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:34:50.716880    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:34:50.716894    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:34:50.741802    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:34:50.741809    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:34:50.756197    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:34:50.756210    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:34:50.768003    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:34:50.768016    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:34:50.807060    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:34:50.807069    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:34:50.820515    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:34:50.820525    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:34:50.833548    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:34:50.833559    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:34:53.347777    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:34:52.247472    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:34:52.247667    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:34:52.269341    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:34:52.269442    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:34:52.286522    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:34:52.286603    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:34:52.304918    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:34:52.304988    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:34:52.315046    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:34:52.315110    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:34:52.325046    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:34:52.325105    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:34:52.335313    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:34:52.335385    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:34:52.345339    6914 logs.go:276] 0 containers: []
	W0624 03:34:52.345354    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:34:52.345414    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:34:52.356170    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:34:52.356188    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:34:52.356193    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:34:52.368720    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:34:52.368733    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:34:52.379786    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:34:52.379798    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:34:52.420651    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:34:52.420662    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:34:52.445572    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:34:52.445583    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:34:52.459529    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:34:52.459540    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:34:52.473387    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:34:52.473398    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:34:52.511403    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:34:52.511411    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:34:52.528931    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:34:52.528942    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:34:52.543769    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:34:52.543780    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:34:52.555293    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:34:52.555306    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:34:52.567438    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:34:52.567449    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:34:52.581430    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:34:52.581440    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:34:52.596259    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:34:52.596269    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:34:52.608111    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:34:52.608121    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:34:52.619395    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:34:52.619405    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:34:52.644387    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:34:52.644431    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:34:58.350404    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:34:58.350841    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:34:58.387836    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:34:58.387977    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:34:58.409477    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:34:58.409597    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:34:58.424362    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:34:58.424437    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:34:58.442268    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:34:58.442332    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:34:58.453493    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:34:58.453570    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:34:58.464410    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:34:58.464471    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:34:58.475181    6932 logs.go:276] 0 containers: []
	W0624 03:34:58.475193    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:34:58.475255    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:34:58.487218    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:34:58.487234    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:34:58.487239    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:34:58.499534    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:34:58.499545    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:34:58.523403    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:34:58.523414    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:34:58.560022    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:34:58.560029    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:34:58.572225    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:34:58.572235    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:34:58.584309    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:34:58.584319    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:34:58.602761    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:34:58.602771    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:34:58.614875    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:34:58.614886    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:34:58.630347    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:34:58.630357    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:34:58.647407    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:34:58.647417    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:34:58.690111    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:34:58.690122    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:34:58.706894    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:34:58.706904    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:34:58.721261    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:34:58.721272    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:34:58.734905    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:34:58.734916    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:34:58.749165    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:34:58.749176    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:34:58.753950    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:34:58.753956    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:34:58.765886    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:34:58.765898    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:34:55.150766    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:35:01.279262    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:35:00.153031    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:35:00.153134    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:35:00.169399    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:35:00.169470    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:35:00.180088    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:35:00.180168    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:35:00.190847    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:35:00.190910    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:35:00.201193    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:35:00.201265    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:35:00.211635    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:35:00.211707    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:35:00.222232    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:35:00.222296    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:35:00.236695    6914 logs.go:276] 0 containers: []
	W0624 03:35:00.236707    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:35:00.236764    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:35:00.247094    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:35:00.247111    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:35:00.247116    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:35:00.271931    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:35:00.271941    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:35:00.286157    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:35:00.286168    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:35:00.300051    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:35:00.300062    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:35:00.322559    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:35:00.322570    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:35:00.335980    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:35:00.335989    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:35:00.347004    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:35:00.347015    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:35:00.351182    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:35:00.351191    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:35:00.390014    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:35:00.390024    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:35:00.404781    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:35:00.404791    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:35:00.418045    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:35:00.418054    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:35:00.429647    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:35:00.429657    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:35:00.452813    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:35:00.452821    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:35:00.489383    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:35:00.489401    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:35:00.500299    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:35:00.500312    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:35:00.516132    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:35:00.516142    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:35:00.528447    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:35:00.528458    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:35:03.042183    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:35:06.279983    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:35:06.280133    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:35:06.293246    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:35:06.293325    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:35:06.304881    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:35:06.304950    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:35:06.315664    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:35:06.315738    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:35:06.326090    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:35:06.326163    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:35:06.336568    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:35:06.336633    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:35:06.347082    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:35:06.347149    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:35:06.359156    6932 logs.go:276] 0 containers: []
	W0624 03:35:06.359173    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:35:06.359230    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:35:06.370579    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:35:06.370596    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:35:06.370601    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:35:06.382515    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:35:06.382526    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:35:06.396154    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:35:06.396164    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:35:06.413154    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:35:06.413164    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:35:06.424698    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:35:06.424708    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:35:06.435473    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:35:06.435485    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:35:06.461204    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:35:06.461211    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:35:06.495673    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:35:06.495684    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:35:06.509177    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:35:06.509187    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:35:06.522036    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:35:06.522046    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:35:06.535801    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:35:06.535811    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:35:06.549190    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:35:06.549200    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:35:06.588180    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:35:06.588188    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:35:06.592753    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:35:06.592759    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:35:06.608457    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:35:06.608467    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:35:06.624043    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:35:06.624054    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:35:06.635892    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:35:06.635908    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:35:08.044520    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:35:08.044889    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:35:08.087722    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:35:08.087813    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:35:08.105356    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:35:08.105435    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:35:08.119283    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:35:08.119354    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:35:08.131119    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:35:08.131191    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:35:08.142245    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:35:08.142305    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:35:08.153613    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:35:08.153680    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:35:08.164367    6914 logs.go:276] 0 containers: []
	W0624 03:35:08.164381    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:35:08.164443    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:35:08.180532    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:35:08.180549    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:35:08.180554    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:35:08.218172    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:35:08.218180    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:35:08.233769    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:35:08.233779    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:35:08.245723    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:35:08.245734    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:35:08.259610    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:35:08.259622    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:35:08.270603    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:35:08.270614    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:35:08.282579    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:35:08.282590    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:35:08.301269    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:35:08.301279    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:35:08.326184    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:35:08.326194    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:35:08.340277    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:35:08.340289    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:35:08.359929    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:35:08.359939    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:35:08.375179    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:35:08.375190    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:35:08.398574    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:35:08.398581    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:35:08.402828    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:35:08.402834    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:35:08.437667    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:35:08.437678    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:35:08.451992    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:35:08.452007    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:35:08.464915    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:35:08.464931    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
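
The Checking/stopped pairs from api_server.go:253 and api_server.go:269 that follow show the health-polling loop driving all of this log gathering: each probe of https://10.0.2.15:8443/healthz times out, and each timeout triggers another diagnostic sweep. A rough curl equivalent of that probe is sketched below; minikube itself uses a Go HTTP client internally, and the 5-second timeout and retry interval here are assumptions, not values taken from this run.

    # Rough curl equivalent of the healthz probe logged by api_server.go:253/269.
    # Timeout and sleep values are assumptions for illustration only.
    until curl -sk --max-time 5 https://10.0.2.15:8443/healthz | grep -q ok; do
      echo "apiserver not healthy yet; retrying"
      sleep 5
    done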
	I0624 03:35:09.150150    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:35:10.982335    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:35:14.152324    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:35:14.152429    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:35:14.163791    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:35:14.163862    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:35:14.190971    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:35:14.191046    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:35:14.202088    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:35:14.202158    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:35:14.212320    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:35:14.212394    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:35:14.222886    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:35:14.222953    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:35:14.233506    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:35:14.233578    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:35:14.243796    6932 logs.go:276] 0 containers: []
	W0624 03:35:14.243806    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:35:14.243865    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:35:14.254777    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:35:14.254795    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:35:14.254800    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:35:14.293779    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:35:14.293789    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:35:14.308457    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:35:14.308466    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:35:14.324532    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:35:14.324544    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:35:14.336894    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:35:14.336905    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:35:14.348460    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:35:14.348472    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:35:14.363321    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:35:14.363331    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:35:14.377015    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:35:14.377028    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:35:14.387895    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:35:14.387906    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:35:14.402012    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:35:14.402022    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:35:14.416948    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:35:14.416957    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:35:14.428036    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:35:14.428046    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:35:14.451993    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:35:14.452002    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:35:14.463533    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:35:14.463547    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:35:14.467761    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:35:14.467767    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:35:14.502452    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:35:14.502466    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:35:14.514885    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:35:14.514902    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:35:17.034480    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:35:15.983802    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:35:15.984156    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:35:16.021323    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:35:16.021458    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:35:16.041545    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:35:16.041644    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:35:16.060669    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:35:16.060737    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:35:16.073356    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:35:16.073430    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:35:16.084221    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:35:16.084293    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:35:16.095330    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:35:16.095402    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:35:16.106177    6914 logs.go:276] 0 containers: []
	W0624 03:35:16.106191    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:35:16.106249    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:35:16.117280    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:35:16.117299    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:35:16.117305    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:35:16.140082    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:35:16.140095    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:35:16.152898    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:35:16.152909    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:35:16.164342    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:35:16.164353    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:35:16.178499    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:35:16.178511    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:35:16.190719    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:35:16.190730    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:35:16.203135    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:35:16.203145    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:35:16.220695    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:35:16.220707    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:35:16.257843    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:35:16.257851    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:35:16.293174    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:35:16.293185    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:35:16.308786    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:35:16.308796    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:35:16.332872    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:35:16.332881    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:35:16.344639    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:35:16.344650    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:35:16.359373    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:35:16.359384    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:35:16.383766    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:35:16.383776    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:35:16.401029    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:35:16.401038    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:35:16.405708    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:35:16.405713    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:35:18.917559    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:35:22.036907    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:35:22.037186    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:35:22.069278    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:35:22.069409    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:35:22.087686    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:35:22.087773    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:35:22.102437    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:35:22.102513    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:35:22.114787    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:35:22.114859    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:35:22.125123    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:35:22.125194    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:35:22.135912    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:35:22.135973    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:35:22.146453    6932 logs.go:276] 0 containers: []
	W0624 03:35:22.146466    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:35:22.146524    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:35:22.158964    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:35:22.158985    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:35:22.158991    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:35:22.196809    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:35:22.196831    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:35:22.211090    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:35:22.211102    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:35:22.225230    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:35:22.225242    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:35:22.240914    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:35:22.240924    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:35:22.254353    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:35:22.254363    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:35:22.265792    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:35:22.265805    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:35:22.283937    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:35:22.283947    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:35:22.295246    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:35:22.295258    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:35:22.309160    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:35:22.309171    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:35:22.320379    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:35:22.320390    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:35:22.332361    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:35:22.332372    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:35:22.356921    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:35:22.356934    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:35:22.369627    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:35:22.369638    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:35:22.373963    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:35:22.373970    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:35:22.407391    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:35:22.407401    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:35:22.424869    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:35:22.424882    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:35:23.919875    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:35:23.920043    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:35:23.936138    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:35:23.936225    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:35:23.948886    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:35:23.948964    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:35:23.960124    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:35:23.960194    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:35:23.970747    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:35:23.970812    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:35:23.981207    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:35:23.981268    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:35:23.992482    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:35:23.992554    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:35:24.002864    6914 logs.go:276] 0 containers: []
	W0624 03:35:24.002876    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:35:24.002935    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:35:24.013530    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:35:24.013546    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:35:24.013552    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:35:24.053102    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:35:24.053110    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:35:24.066906    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:35:24.066917    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:35:24.080652    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:35:24.080662    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:35:24.094873    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:35:24.094887    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:35:24.106401    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:35:24.106415    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:35:24.120615    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:35:24.120630    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:35:24.125211    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:35:24.125217    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:35:24.149701    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:35:24.149716    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:35:24.163965    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:35:24.163979    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:35:24.185421    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:35:24.185434    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:35:24.210724    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:35:24.210744    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:35:24.224100    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:35:24.224113    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:35:24.236637    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:35:24.236649    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:35:24.272121    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:35:24.272136    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:35:24.286614    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:35:24.286628    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:35:24.301426    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:35:24.301440    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:35:24.939051    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:35:26.826708    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:35:29.941444    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:35:29.941885    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:35:29.972764    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:35:29.972894    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:35:29.992225    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:35:29.992318    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:35:30.006461    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:35:30.006537    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:35:30.023077    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:35:30.023143    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:35:30.033574    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:35:30.033649    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:35:30.044080    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:35:30.044149    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:35:30.054517    6932 logs.go:276] 0 containers: []
	W0624 03:35:30.054528    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:35:30.054585    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:35:30.065023    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:35:30.065043    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:35:30.065048    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:35:30.079227    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:35:30.079239    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:35:30.090701    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:35:30.090712    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:35:30.102304    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:35:30.102314    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:35:30.125311    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:35:30.125320    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:35:30.162223    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:35:30.162233    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:35:30.198647    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:35:30.198658    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:35:30.212455    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:35:30.212465    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:35:30.228027    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:35:30.228040    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:35:30.244969    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:35:30.244980    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:35:30.259109    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:35:30.259119    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:35:30.271966    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:35:30.271977    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:35:30.283522    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:35:30.283536    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:35:30.295031    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:35:30.295044    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:35:30.312749    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:35:30.312765    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:35:30.317170    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:35:30.317175    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:35:30.330759    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:35:30.330772    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:35:32.846757    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:35:31.829441    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:35:31.829629    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:35:31.851054    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:35:31.851154    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:35:31.866913    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:35:31.866987    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:35:31.879152    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:35:31.879217    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:35:31.889978    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:35:31.890046    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:35:31.900459    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:35:31.900520    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:35:31.911751    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:35:31.911818    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:35:31.929330    6914 logs.go:276] 0 containers: []
	W0624 03:35:31.929341    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:35:31.929390    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:35:31.939795    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:35:31.939815    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:35:31.939821    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:35:31.954907    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:35:31.954917    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:35:31.971802    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:35:31.971811    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:35:31.995313    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:35:31.995320    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:35:32.007767    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:35:32.007777    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:35:32.048390    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:35:32.048401    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:35:32.082730    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:35:32.082740    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:35:32.098170    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:35:32.098183    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:35:32.109561    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:35:32.109575    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:35:32.113752    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:35:32.113758    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:35:32.127849    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:35:32.127866    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:35:32.141413    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:35:32.141423    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:35:32.152738    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:35:32.152749    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:35:32.166848    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:35:32.166863    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:35:32.178459    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:35:32.178472    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:35:32.215551    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:35:32.215565    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:35:32.229391    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:35:32.229404    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:35:34.746714    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:35:37.847101    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:35:37.847247    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:35:37.868541    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:35:37.868623    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:35:37.885324    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:35:37.885386    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:35:37.897439    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:35:37.897511    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:35:37.907978    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:35:37.908047    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:35:37.918742    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:35:37.918804    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:35:37.929823    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:35:37.929896    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:35:37.940202    6932 logs.go:276] 0 containers: []
	W0624 03:35:37.940216    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:35:37.940271    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:35:37.950462    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:35:37.950478    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:35:37.950484    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:35:37.961981    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:35:37.961992    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:35:37.974308    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:35:37.974319    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:35:37.985516    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:35:37.985528    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:35:38.022336    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:35:38.022347    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:35:38.026445    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:35:38.026452    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:35:38.040975    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:35:38.040984    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:35:38.055163    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:35:38.055175    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:35:38.068460    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:35:38.068471    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:35:38.079990    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:35:38.080000    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:35:38.118329    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:35:38.118339    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:35:38.129492    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:35:38.129502    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:35:38.146625    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:35:38.146633    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:35:38.160031    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:35:38.160040    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:35:38.174083    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:35:38.174095    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:35:38.189525    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:35:38.189534    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:35:38.212369    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:35:38.212378    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
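
The repeated docker ps listings at the top of each cycle enumerate one component at a time by filtering on the k8s_<component> container-name prefix that the Docker-based runtime gives these pods. Generalized into a loop (the loop itself and the "none found" wording are illustrative; the component names and filter syntax are verbatim from the listings above):

    # One docker ps query per component, as in the enumeration step logged above.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      ids=$(docker ps -a --filter=name=k8s_$c --format={{.ID}})
      echo "$c: ${ids:-none found}"   # e.g. kindnet reports no containers, matching the warnings above
    done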
	I0624 03:35:39.749338    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:35:39.749525    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:35:39.766979    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:35:39.767060    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:35:39.780896    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:35:39.780975    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:35:39.792161    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:35:39.792218    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:35:39.802887    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:35:39.802958    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:35:39.812628    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:35:39.812688    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:35:39.824771    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:35:39.824841    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:35:39.834976    6914 logs.go:276] 0 containers: []
	W0624 03:35:39.834988    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:35:39.835041    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:35:39.848748    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:35:39.848763    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:35:39.848769    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:35:39.863669    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:35:39.863680    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:35:39.877280    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:35:39.877291    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:35:39.888423    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:35:39.888435    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:35:39.900350    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:35:39.900363    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:35:39.926199    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:35:39.926211    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:35:39.945978    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:35:39.945989    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:35:39.980459    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:35:39.980471    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:35:39.992425    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:35:39.992435    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:35:40.009235    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:35:40.009245    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:35:40.046217    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:35:40.046229    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:35:40.726473    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:35:40.050174    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:35:40.050182    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:35:40.064103    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:35:40.064113    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:35:40.075967    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:35:40.075979    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:35:40.088208    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:35:40.088221    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:35:40.111258    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:35:40.111269    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:35:40.126067    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:35:40.126076    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:35:42.638895    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:35:45.729167    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:35:45.729616    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:35:45.769239    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:35:45.769373    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:35:45.791124    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:35:45.791229    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:35:45.805920    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:35:45.805987    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:35:45.818995    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:35:45.819076    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:35:45.830110    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:35:45.830187    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:35:45.840934    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:35:45.841005    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:35:45.851495    6932 logs.go:276] 0 containers: []
	W0624 03:35:45.851506    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:35:45.851563    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:35:45.862066    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:35:45.862087    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:35:45.862092    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:35:45.877210    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:35:45.877221    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:35:45.888607    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:35:45.888616    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:35:45.911946    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:35:45.911955    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:35:45.923838    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:35:45.923848    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:35:45.945092    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:35:45.945103    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:35:45.958407    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:35:45.958417    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:35:45.972657    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:35:45.972668    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:35:45.986736    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:35:45.986747    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:35:45.998449    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:35:45.998460    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:35:46.014369    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:35:46.014381    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:35:46.051671    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:35:46.051679    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:35:46.086823    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:35:46.086835    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:35:46.101061    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:35:46.101072    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:35:46.113070    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:35:46.113083    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:35:46.117338    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:35:46.117345    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:35:46.136309    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:35:46.136319    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:35:48.652946    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:35:47.641169    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:35:47.641398    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:35:47.665677    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:35:47.665789    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:35:47.682394    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:35:47.682473    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:35:47.695016    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:35:47.695079    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:35:47.707453    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:35:47.707526    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:35:47.718916    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:35:47.718979    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:35:47.729402    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:35:47.729464    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:35:47.739788    6914 logs.go:276] 0 containers: []
	W0624 03:35:47.739799    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:35:47.739852    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:35:47.750710    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:35:47.750731    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:35:47.750736    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:35:47.765311    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:35:47.765322    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:35:47.776580    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:35:47.776591    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:35:47.788576    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:35:47.788586    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:35:47.825626    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:35:47.825640    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:35:47.850425    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:35:47.850442    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:35:47.863820    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:35:47.863832    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:35:47.878598    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:35:47.878609    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:35:47.893224    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:35:47.893239    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:35:47.905075    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:35:47.905087    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:35:47.909288    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:35:47.909294    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:35:47.920235    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:35:47.920246    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:35:47.933991    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:35:47.934002    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:35:47.946022    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:35:47.946032    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:35:47.963227    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:35:47.963237    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:35:47.986860    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:35:47.986867    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:35:48.024545    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:35:48.024555    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
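(The block above is one full iteration of minikube's apiserver health probe: api_server.go polls https://10.0.2.15:8443/healthz with a short per-request timeout, and each failure triggers logs.go to enumerate the control-plane containers with `docker ps` filters and dump their logs. A minimal standalone sketch of the probe loop itself, in Go — my illustration, not minikube's actual code; the address and the shape of the timeout error are taken from the log, and certificate verification is skipped because the in-VM apiserver certificate is issued by minikube's private CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// Per-probe deadline; when it fires, the error reads
		// "context deadline exceeded (Client.Timeout exceeded while
		// awaiting headers)", exactly as in the log lines above.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// A bare external probe does not trust minikube's CA,
			// so skip verification for this sketch only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < 3; i++ {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // the log-gathering sweep would start here
			continue
		}
		resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
		time.Sleep(2 * time.Second)
	}
}

In the run recorded here the probe never succeeds, so the ps/logs sweep repeats on every iteration for both clients, pid 6914 and pid 6932.)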
	I0624 03:35:53.655339    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:35:53.655709    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:35:53.687156    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:35:53.687292    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:35:53.705494    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:35:53.705598    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:35:53.719306    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:35:53.719382    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:35:53.731207    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:35:53.731277    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:35:53.743278    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:35:53.743351    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:35:53.753902    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:35:53.753969    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:35:53.764420    6932 logs.go:276] 0 containers: []
	W0624 03:35:53.764430    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:35:53.764483    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:35:53.787939    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:35:53.787958    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:35:53.787963    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:35:53.800436    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:35:53.800450    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:35:53.812017    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:35:53.812028    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:35:53.823773    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:35:53.823784    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:35:53.847359    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:35:53.847369    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:35:53.851593    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:35:53.851599    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:35:53.888406    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:35:53.888420    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:35:53.907876    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:35:53.907887    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:35:53.920152    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:35:53.920164    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:35:53.937845    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:35:53.937860    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:35:53.949243    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:35:53.949257    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:35:53.963185    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:35:53.963198    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:35:53.977539    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:35:53.977554    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:35:53.995776    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:35:53.995785    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:35:54.008056    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:35:54.008071    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:35:54.022610    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:35:54.022624    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:35:54.061913    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:35:54.061923    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:35:50.540760    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:35:56.578480    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:35:55.543084    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:35:55.543444    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:35:55.581434    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:35:55.581574    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:35:55.604164    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:35:55.604291    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:35:55.619727    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:35:55.619803    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:35:55.636843    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:35:55.636912    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:35:55.649032    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:35:55.649097    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:35:55.659450    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:35:55.659511    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:35:55.669596    6914 logs.go:276] 0 containers: []
	W0624 03:35:55.669612    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:35:55.669664    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:35:55.680197    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:35:55.680214    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:35:55.680220    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:35:55.714630    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:35:55.714642    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:35:55.726852    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:35:55.726862    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:35:55.741034    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:35:55.741045    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:35:55.777600    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:35:55.777609    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:35:55.791339    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:35:55.791350    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:35:55.805633    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:35:55.805643    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:35:55.817052    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:35:55.817066    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:35:55.836719    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:35:55.836731    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:35:55.867313    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:35:55.867324    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:35:55.881126    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:35:55.881136    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:35:55.899536    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:35:55.899547    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:35:55.914178    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:35:55.914188    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:35:55.937843    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:35:55.937850    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:35:55.942504    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:35:55.942510    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:35:55.956911    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:35:55.956920    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:35:55.968308    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:35:55.968318    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:35:58.481474    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:36:01.580122    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:36:01.580286    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:36:01.598374    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:36:01.598465    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:36:01.611859    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:36:01.611930    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:36:01.623153    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:36:01.623224    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:36:01.633295    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:36:01.633366    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:36:01.647042    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:36:01.647106    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:36:01.657967    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:36:01.658037    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:36:01.668036    6932 logs.go:276] 0 containers: []
	W0624 03:36:01.668051    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:36:01.668108    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:36:01.678968    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:36:01.678987    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:36:01.678992    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:36:01.696086    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:36:01.696097    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:36:01.707412    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:36:01.707423    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:36:01.731509    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:36:01.731517    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:36:01.743258    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:36:01.743269    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:36:01.748130    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:36:01.748139    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:36:01.760027    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:36:01.760040    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:36:01.773416    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:36:01.773427    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:36:01.785695    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:36:01.785706    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:36:01.801812    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:36:01.801823    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:36:01.813019    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:36:01.813029    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:36:01.847716    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:36:01.847728    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:36:01.868528    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:36:01.868540    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:36:01.882093    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:36:01.882103    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:36:01.896140    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:36:01.896151    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:36:01.908414    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:36:01.908424    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:36:01.946719    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:36:01.946727    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:36:03.483986    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:36:03.484269    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:36:03.511092    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:36:03.511222    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:36:03.529166    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:36:03.529256    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:36:03.546036    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:36:03.546116    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:36:03.560564    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:36:03.560645    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:36:03.571816    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:36:03.571883    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:36:03.586385    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:36:03.586519    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:36:03.596752    6914 logs.go:276] 0 containers: []
	W0624 03:36:03.596762    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:36:03.596813    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:36:03.606953    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:36:03.606969    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:36:03.606974    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:36:03.642538    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:36:03.642548    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:36:03.657677    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:36:03.657691    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:36:03.671846    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:36:03.671859    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:36:03.683855    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:36:03.683864    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:36:03.695545    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:36:03.695560    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:36:03.717384    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:36:03.717393    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:36:03.756564    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:36:03.756571    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:36:03.779438    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:36:03.779445    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:36:03.791889    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:36:03.791904    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:36:03.806065    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:36:03.806079    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:36:03.833297    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:36:03.833311    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:36:03.844548    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:36:03.844559    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:36:03.848501    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:36:03.848507    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:36:03.860083    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:36:03.860095    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:36:03.874134    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:36:03.874148    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:36:03.885790    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:36:03.885801    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:36:04.460836    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:36:06.402063    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:36:09.462816    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:36:09.462963    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:36:09.475375    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:36:09.475457    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:36:09.486375    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:36:09.486449    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:36:09.496543    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:36:09.496616    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:36:09.506950    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:36:09.507014    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:36:09.517556    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:36:09.517626    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:36:09.528005    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:36:09.528072    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:36:09.541191    6932 logs.go:276] 0 containers: []
	W0624 03:36:09.541204    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:36:09.541262    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:36:09.552166    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:36:09.552185    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:36:09.552190    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:36:09.589932    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:36:09.589941    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:36:09.626233    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:36:09.626244    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:36:09.639114    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:36:09.639124    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:36:09.653538    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:36:09.653552    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:36:09.671907    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:36:09.671917    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:36:09.676848    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:36:09.676854    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:36:09.691077    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:36:09.691091    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:36:09.708599    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:36:09.708610    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:36:09.720495    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:36:09.720508    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:36:09.733698    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:36:09.733711    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:36:09.749129    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:36:09.749143    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:36:09.761206    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:36:09.761216    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:36:09.773245    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:36:09.773254    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:36:09.784465    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:36:09.784473    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:36:09.808665    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:36:09.808672    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:36:09.819943    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:36:09.819957    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:36:12.334071    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:36:11.404738    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:36:11.405019    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:36:11.440907    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:36:11.441033    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:36:11.458429    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:36:11.458521    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:36:11.471651    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:36:11.471729    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:36:11.484086    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:36:11.484156    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:36:11.494997    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:36:11.495070    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:36:11.505340    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:36:11.505411    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:36:11.530576    6914 logs.go:276] 0 containers: []
	W0624 03:36:11.530589    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:36:11.530650    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:36:11.558265    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:36:11.558286    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:36:11.558291    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:36:11.570569    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:36:11.570580    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:36:11.574894    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:36:11.574899    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:36:11.586447    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:36:11.586458    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:36:11.598531    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:36:11.598540    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:36:11.616424    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:36:11.616433    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:36:11.627986    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:36:11.627997    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:36:11.650438    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:36:11.650449    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:36:11.687355    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:36:11.687367    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:36:11.712239    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:36:11.712249    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:36:11.723902    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:36:11.723916    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:36:11.760708    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:36:11.760716    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:36:11.775613    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:36:11.775624    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:36:11.789927    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:36:11.789936    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:36:11.804216    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:36:11.804227    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:36:11.818180    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:36:11.818194    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:36:11.836903    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:36:11.836912    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:36:14.349446    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:36:17.336618    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:36:17.336823    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:36:17.356312    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:36:17.356402    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:36:17.370312    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:36:17.370389    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:36:17.381747    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:36:17.381814    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:36:17.392492    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:36:17.392560    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:36:17.403539    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:36:17.403604    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:36:17.413655    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:36:17.413719    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:36:17.423727    6932 logs.go:276] 0 containers: []
	W0624 03:36:17.423740    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:36:17.423797    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:36:17.433896    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:36:17.433913    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:36:17.433918    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:36:17.448585    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:36:17.448595    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:36:17.462206    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:36:17.462219    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:36:17.477238    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:36:17.477250    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:36:17.499853    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:36:17.499863    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:36:17.515221    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:36:17.515232    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:36:17.526143    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:36:17.526157    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:36:17.541604    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:36:17.541619    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:36:17.545943    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:36:17.545950    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:36:17.594276    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:36:17.594288    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:36:17.608529    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:36:17.608539    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:36:17.621776    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:36:17.621787    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:36:17.639640    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:36:17.639650    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:36:17.651561    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:36:17.651571    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:36:17.689830    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:36:17.689855    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:36:17.703060    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:36:17.703071    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:36:17.715089    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:36:17.715099    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:36:19.351821    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:36:19.352058    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:36:19.368708    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:36:19.368796    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:36:19.381836    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:36:19.381904    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:36:19.393397    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:36:19.393469    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:36:19.403597    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:36:19.403660    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:36:19.423126    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:36:19.423197    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:36:19.444523    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:36:19.444587    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:36:19.456147    6914 logs.go:276] 0 containers: []
	W0624 03:36:19.456158    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:36:19.456212    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:36:19.466545    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:36:19.466564    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:36:19.466569    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:36:19.477769    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:36:19.477779    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:36:19.501465    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:36:19.501477    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:36:19.513655    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:36:19.513665    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:36:19.525342    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:36:19.525353    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:36:19.536988    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:36:19.536998    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:36:19.571054    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:36:19.571065    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:36:19.587972    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:36:19.587983    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:36:19.604836    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:36:19.604846    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:36:19.622053    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:36:19.622063    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:36:19.643410    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:36:19.643421    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:36:19.665735    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:36:19.665742    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:36:19.670322    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:36:19.670329    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:36:19.684259    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:36:19.684269    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:36:19.695645    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:36:19.695655    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:36:19.707934    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:36:19.707944    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:36:19.745596    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:36:19.745603    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:36:20.230391    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:36:22.264651    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:36:27.267039    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:36:27.267150    6914 kubeadm.go:591] duration metric: took 4m4.017851s to restartPrimaryControlPlane
	W0624 03:36:27.267218    6914 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0624 03:36:27.267251    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0624 03:36:28.302722    6914 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.035467375s)
	I0624 03:36:28.302777    6914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0624 03:36:28.307653    6914 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0624 03:36:28.310357    6914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0624 03:36:28.312978    6914 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0624 03:36:28.312984    6914 kubeadm.go:156] found existing configuration files:
	
	I0624 03:36:28.313002    6914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51139 /etc/kubernetes/admin.conf
	I0624 03:36:28.315657    6914 kubeadm.go:162] "https://control-plane.minikube.internal:51139" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51139 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0624 03:36:28.315682    6914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0624 03:36:28.318053    6914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51139 /etc/kubernetes/kubelet.conf
	I0624 03:36:28.320744    6914 kubeadm.go:162] "https://control-plane.minikube.internal:51139" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51139 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0624 03:36:28.320765    6914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0624 03:36:28.323780    6914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51139 /etc/kubernetes/controller-manager.conf
	I0624 03:36:28.326046    6914 kubeadm.go:162] "https://control-plane.minikube.internal:51139" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51139 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0624 03:36:28.326068    6914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0624 03:36:28.328732    6914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51139 /etc/kubernetes/scheduler.conf
	I0624 03:36:28.331475    6914 kubeadm.go:162] "https://control-plane.minikube.internal:51139" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51139 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0624 03:36:28.331498    6914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
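(The kubeadm.go:154-162 lines above show the stale-config cleanup that runs after `kubeadm reset`: each expected kubeconfig is grepped for the cluster's control-plane endpoint and removed when the endpoint is absent — in this run the files are simply missing, so every grep exits with status 2 and every file is queued for removal. A simplified Go sketch of that keep-or-remove rule — my illustration; minikube performs the equivalent grep/rm over SSH inside the guest via ssh_runner:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:51139"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing at a different endpoint:
			// drop it so `kubeadm init` regenerates it below.
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
			os.Remove(conf) // a no-op error if the file is already gone
		}
	}
}

Removing rather than patching the files is the safe choice here, since the subsequent `kubeadm init` writes fresh admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf in any case.)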
	I0624 03:36:28.333971    6914 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0624 03:36:28.350783    6914 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0624 03:36:28.350820    6914 kubeadm.go:309] [preflight] Running pre-flight checks
	I0624 03:36:28.398345    6914 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0624 03:36:28.398403    6914 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0624 03:36:28.398459    6914 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0624 03:36:28.446777    6914 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0624 03:36:28.451001    6914 out.go:204]   - Generating certificates and keys ...
	I0624 03:36:28.451035    6914 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0624 03:36:28.451068    6914 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0624 03:36:28.451103    6914 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0624 03:36:28.451137    6914 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0624 03:36:28.451171    6914 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0624 03:36:28.451198    6914 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0624 03:36:28.451233    6914 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0624 03:36:28.451263    6914 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0624 03:36:28.451298    6914 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0624 03:36:28.451334    6914 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0624 03:36:28.451351    6914 kubeadm.go:309] [certs] Using the existing "sa" key
	I0624 03:36:28.451418    6914 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0624 03:36:28.731261    6914 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0624 03:36:28.851690    6914 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0624 03:36:28.905038    6914 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0624 03:36:28.991419    6914 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0624 03:36:29.025663    6914 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0624 03:36:29.025722    6914 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0624 03:36:29.025757    6914 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0624 03:36:29.090464    6914 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0624 03:36:25.232782    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:36:25.233224    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:36:25.275174    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:36:25.275306    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:36:25.293151    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:36:25.293241    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:36:25.306985    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:36:25.307060    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:36:25.318931    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:36:25.318998    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:36:25.329544    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:36:25.329605    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:36:25.340091    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:36:25.340162    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:36:25.349831    6932 logs.go:276] 0 containers: []
	W0624 03:36:25.349844    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:36:25.349902    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:36:25.359906    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:36:25.359926    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:36:25.359931    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:36:25.372103    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:36:25.372114    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:36:25.409426    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:36:25.409435    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:36:25.427015    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:36:25.427024    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:36:25.438056    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:36:25.438067    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:36:25.452382    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:36:25.452391    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:36:25.468253    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:36:25.468265    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:36:25.503001    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:36:25.503010    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:36:25.516527    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:36:25.516539    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:36:25.528290    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:36:25.528301    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:36:25.542851    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:36:25.542862    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:36:25.558341    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:36:25.558354    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
	I0624 03:36:25.575876    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:36:25.575887    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:36:25.587218    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:36:25.587229    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:36:25.611121    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:36:25.611129    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:36:25.625986    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:36:25.625997    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:36:25.630369    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:36:25.630376    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:36:28.145235    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
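
The alternating "Checking apiserver healthz" / "stopped: ... context deadline exceeded" pairs above are minikube polling the apiserver's /healthz endpoint until it answers. A minimal shell equivalent from inside the guest would be the sketch below (assumptions: curl is available in the guest, and -k skips certificate verification, whereas minikube's Go client pins the cluster CA):

	# retry /healthz until the apiserver reports "ok"
	while ! curl -sk --max-time 2 https://10.0.2.15:8443/healthz | grep -q ok; do
	    echo "apiserver not ready, retrying..."
	    sleep 5
	done
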
	I0624 03:36:29.094686    6914 out.go:204]   - Booting up control plane ...
	I0624 03:36:29.094728    6914 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0624 03:36:29.094791    6914 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0624 03:36:29.094824    6914 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0624 03:36:29.094861    6914 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0624 03:36:29.094941    6914 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0624 03:36:33.147476    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:36:33.147795    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:36:33.183751    6932 logs.go:276] 2 containers: [d9f26ec806e4 5e68f03fc08d]
	I0624 03:36:33.183891    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:36:33.209052    6932 logs.go:276] 2 containers: [60e930ef5396 ff24041fb2ac]
	I0624 03:36:33.209146    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:36:33.237810    6932 logs.go:276] 1 containers: [d10d313c9997]
	I0624 03:36:33.237879    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:36:33.251948    6932 logs.go:276] 2 containers: [68ab673b3a3b b62fd1734dff]
	I0624 03:36:33.252015    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:36:33.262721    6932 logs.go:276] 1 containers: [68b5daf9fea2]
	I0624 03:36:33.262788    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:36:33.273504    6932 logs.go:276] 2 containers: [daa91213266c b8559e67098a]
	I0624 03:36:33.273569    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:36:33.283882    6932 logs.go:276] 0 containers: []
	W0624 03:36:33.283896    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:36:33.283947    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:36:33.295142    6932 logs.go:276] 2 containers: [4b6f327abb5a 8b0230ea6478]
	I0624 03:36:33.295162    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:36:33.295168    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:36:33.299986    6932 logs.go:123] Gathering logs for storage-provisioner [8b0230ea6478] ...
	I0624 03:36:33.299994    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0230ea6478"
	I0624 03:36:33.311543    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:36:33.311557    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:36:33.335402    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:36:33.335412    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:36:33.347705    6932 logs.go:123] Gathering logs for kube-apiserver [d9f26ec806e4] ...
	I0624 03:36:33.347717    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f26ec806e4"
	I0624 03:36:33.361845    6932 logs.go:123] Gathering logs for kube-controller-manager [b8559e67098a] ...
	I0624 03:36:33.361855    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8559e67098a"
	I0624 03:36:33.374699    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:36:33.374711    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:36:33.414789    6932 logs.go:123] Gathering logs for kube-apiserver [5e68f03fc08d] ...
	I0624 03:36:33.414801    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e68f03fc08d"
	I0624 03:36:33.428124    6932 logs.go:123] Gathering logs for etcd [60e930ef5396] ...
	I0624 03:36:33.428138    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e930ef5396"
	I0624 03:36:33.443383    6932 logs.go:123] Gathering logs for storage-provisioner [4b6f327abb5a] ...
	I0624 03:36:33.443398    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b6f327abb5a"
	I0624 03:36:33.457227    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:36:33.457241    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:36:33.497470    6932 logs.go:123] Gathering logs for etcd [ff24041fb2ac] ...
	I0624 03:36:33.497484    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff24041fb2ac"
	I0624 03:36:33.511639    6932 logs.go:123] Gathering logs for coredns [d10d313c9997] ...
	I0624 03:36:33.511652    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10d313c9997"
	I0624 03:36:33.523892    6932 logs.go:123] Gathering logs for kube-scheduler [68ab673b3a3b] ...
	I0624 03:36:33.523905    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ab673b3a3b"
	I0624 03:36:33.539199    6932 logs.go:123] Gathering logs for kube-scheduler [b62fd1734dff] ...
	I0624 03:36:33.539212    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fd1734dff"
	I0624 03:36:33.555834    6932 logs.go:123] Gathering logs for kube-proxy [68b5daf9fea2] ...
	I0624 03:36:33.555847    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68b5daf9fea2"
	I0624 03:36:33.569131    6932 logs.go:123] Gathering logs for kube-controller-manager [daa91213266c] ...
	I0624 03:36:33.569143    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa91213266c"
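
Each "Gathering logs for <component>" pair above is one docker ps lookup followed by a docker logs tail, repeated per control-plane component. Condensed into a single loop, the commands minikube issues over SSH look roughly like this (a sketch, not minikube's own code):

	for name in kube-apiserver etcd coredns kube-scheduler \
	            kube-proxy kube-controller-manager storage-provisioner; do
	    # list every container, running or exited, named k8s_<component>
	    for id in $(docker ps -a --filter=name=k8s_${name} --format '{{.ID}}'); do
	        docker logs --tail 400 "$id"
	    done
	done
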
	I0624 03:36:33.594998    6914 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.505130 seconds
	I0624 03:36:33.595052    6914 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0624 03:36:33.600695    6914 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0624 03:36:34.112138    6914 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0624 03:36:34.112332    6914 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-252000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0624 03:36:34.615982    6914 kubeadm.go:309] [bootstrap-token] Using token: ig8u9t.o8ynutmdor6z293i
	I0624 03:36:34.622488    6914 out.go:204]   - Configuring RBAC rules ...
	I0624 03:36:34.622552    6914 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0624 03:36:34.622605    6914 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0624 03:36:34.627054    6914 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0624 03:36:34.628028    6914 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0624 03:36:34.628821    6914 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0624 03:36:34.629714    6914 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0624 03:36:34.637512    6914 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0624 03:36:34.812002    6914 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0624 03:36:35.019817    6914 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0624 03:36:35.020344    6914 kubeadm.go:309] 
	I0624 03:36:35.020379    6914 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0624 03:36:35.020383    6914 kubeadm.go:309] 
	I0624 03:36:35.020429    6914 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0624 03:36:35.020436    6914 kubeadm.go:309] 
	I0624 03:36:35.020451    6914 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0624 03:36:35.020480    6914 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0624 03:36:35.020506    6914 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0624 03:36:35.020510    6914 kubeadm.go:309] 
	I0624 03:36:35.020540    6914 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0624 03:36:35.020544    6914 kubeadm.go:309] 
	I0624 03:36:35.020570    6914 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0624 03:36:35.020574    6914 kubeadm.go:309] 
	I0624 03:36:35.020601    6914 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0624 03:36:35.020636    6914 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0624 03:36:35.020670    6914 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0624 03:36:35.020674    6914 kubeadm.go:309] 
	I0624 03:36:35.020714    6914 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0624 03:36:35.020755    6914 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0624 03:36:35.020758    6914 kubeadm.go:309] 
	I0624 03:36:35.020801    6914 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ig8u9t.o8ynutmdor6z293i \
	I0624 03:36:35.020865    6914 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:589c6e059dc550e81a31ec2e6905c9b09b436fd51ac2d4f41e53b0889a4a68c0 \
	I0624 03:36:35.020875    6914 kubeadm.go:309] 	--control-plane 
	I0624 03:36:35.020879    6914 kubeadm.go:309] 
	I0624 03:36:35.020924    6914 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0624 03:36:35.020930    6914 kubeadm.go:309] 
	I0624 03:36:35.020976    6914 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ig8u9t.o8ynutmdor6z293i \
	I0624 03:36:35.021030    6914 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:589c6e059dc550e81a31ec2e6905c9b09b436fd51ac2d4f41e53b0889a4a68c0 
	I0624 03:36:35.021262    6914 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
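
The --discovery-token-ca-cert-hash in the join commands above pins the cluster CA so that joining nodes can authenticate the control plane. If the hash is ever lost, it can be recomputed from the CA certificate with the standard recipe from the kubeadm documentation (a sketch; the path assumes minikube's certificate directory, /var/lib/minikube/certs, as reported later in this log):

	# recompute the discovery hash; prepend "sha256:" when passing it to kubeadm join
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
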
	I0624 03:36:35.021275    6914 cni.go:84] Creating CNI manager for ""
	I0624 03:36:35.021285    6914 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:36:35.028185    6914 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0624 03:36:35.031376    6914 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0624 03:36:35.034169    6914 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
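
The 496-byte 1-k8s.conflist copied here is not reproduced in the log. For reference, a bridge conflist of roughly the shape minikube generates is sketched below (illustrative only: field values such as the pod subnet are assumptions, not the actual file contents):

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF
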
	I0624 03:36:35.040766    6914 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0624 03:36:35.040817    6914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-252000 minikube.k8s.io/updated_at=2024_06_24T03_36_35_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec minikube.k8s.io/name=stopped-upgrade-252000 minikube.k8s.io/primary=true
	I0624 03:36:35.040818    6914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 03:36:35.075449    6914 kubeadm.go:1107] duration metric: took 34.674666ms to wait for elevateKubeSystemPrivileges
	I0624 03:36:35.080302    6914 ops.go:34] apiserver oom_adj: -16
	W0624 03:36:35.080323    6914 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0624 03:36:35.080329    6914 kubeadm.go:393] duration metric: took 4m11.845379041s to StartCluster
	I0624 03:36:35.080339    6914 settings.go:142] acquiring lock: {Name:mk350ce6fa96c4a87ff2b5575a8be101ddfe67cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:36:35.080508    6914 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:36:35.080884    6914 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/kubeconfig: {Name:mkbbeb070f5681b596c7409cd66efdb520d422d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:36:35.081101    6914 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:36:35.081120    6914 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0624 03:36:35.081160    6914 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-252000"
	I0624 03:36:35.081172    6914 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-252000"
	I0624 03:36:35.081172    6914 config.go:182] Loaded profile config "stopped-upgrade-252000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0624 03:36:35.081173    6914 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-252000"
	W0624 03:36:35.081191    6914 addons.go:243] addon storage-provisioner should already be in state true
	I0624 03:36:35.081186    6914 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-252000"
	I0624 03:36:35.081203    6914 host.go:66] Checking if "stopped-upgrade-252000" exists ...
	I0624 03:36:35.085098    6914 out.go:177] * Verifying Kubernetes components...
	I0624 03:36:35.085840    6914 kapi.go:59] client config for stopped-upgrade-252000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/client.key", CAFile:"/Users/jenkins/minikube-integration/19124-4612/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10210ed80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0624 03:36:35.089667    6914 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-252000"
	W0624 03:36:35.089672    6914 addons.go:243] addon default-storageclass should already be in state true
	I0624 03:36:35.089678    6914 host.go:66] Checking if "stopped-upgrade-252000" exists ...
	I0624 03:36:35.090284    6914 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0624 03:36:35.090289    6914 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0624 03:36:35.090297    6914 sshutil.go:53] new ssh client: &{IP:localhost Port:51107 SSHKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/stopped-upgrade-252000/id_rsa Username:docker}
	I0624 03:36:35.093243    6914 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0624 03:36:36.091071    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:36:35.096251    6914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:36:35.099176    6914 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0624 03:36:35.099182    6914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0624 03:36:35.099187    6914 sshutil.go:53] new ssh client: &{IP:localhost Port:51107 SSHKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/stopped-upgrade-252000/id_rsa Username:docker}
	I0624 03:36:35.166205    6914 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0624 03:36:35.171721    6914 api_server.go:52] waiting for apiserver process to appear ...
	I0624 03:36:35.171761    6914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 03:36:35.175450    6914 api_server.go:72] duration metric: took 94.338959ms to wait for apiserver process to appear ...
	I0624 03:36:35.175458    6914 api_server.go:88] waiting for apiserver healthz status ...
	I0624 03:36:35.175465    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:36:35.215539    6914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0624 03:36:35.242456    6914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0624 03:36:41.093296    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:36:41.093375    6932 kubeadm.go:591] duration metric: took 4m5.204984917s to restartPrimaryControlPlane
	W0624 03:36:41.093422    6932 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0624 03:36:41.093442    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0624 03:36:42.106857    6932 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.013412791s)
	I0624 03:36:42.106930    6932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0624 03:36:42.111793    6932 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0624 03:36:42.114594    6932 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0624 03:36:42.117113    6932 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0624 03:36:42.117120    6932 kubeadm.go:156] found existing configuration files:
	
	I0624 03:36:42.117142    6932 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51210 /etc/kubernetes/admin.conf
	I0624 03:36:42.119609    6932 kubeadm.go:162] "https://control-plane.minikube.internal:51210" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51210 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0624 03:36:42.119632    6932 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0624 03:36:42.122233    6932 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51210 /etc/kubernetes/kubelet.conf
	I0624 03:36:42.124497    6932 kubeadm.go:162] "https://control-plane.minikube.internal:51210" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51210 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0624 03:36:42.124515    6932 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0624 03:36:42.127464    6932 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51210 /etc/kubernetes/controller-manager.conf
	I0624 03:36:42.130941    6932 kubeadm.go:162] "https://control-plane.minikube.internal:51210" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51210 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0624 03:36:42.130983    6932 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0624 03:36:42.133632    6932 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51210 /etc/kubernetes/scheduler.conf
	I0624 03:36:42.136115    6932 kubeadm.go:162] "https://control-plane.minikube.internal:51210" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51210 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0624 03:36:42.136139    6932 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
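
The four grep/rm pairs above implement a single cleanup rule: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed so that the kubeadm init below can regenerate it. Written as one loop, the equivalent shell is (a sketch mirroring the kubeadm.go:162 checks):

	endpoint="https://control-plane.minikube.internal:51210"
	for f in admin kubelet controller-manager scheduler; do
	    conf="/etc/kubernetes/${f}.conf"
	    # keep the file only if it points at the expected endpoint
	    sudo grep -q "$endpoint" "$conf" || sudo rm -f "$conf"
	done
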
	I0624 03:36:42.139100    6932 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0624 03:36:42.155936    6932 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0624 03:36:42.155983    6932 kubeadm.go:309] [preflight] Running pre-flight checks
	I0624 03:36:42.216471    6932 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0624 03:36:42.216529    6932 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0624 03:36:42.216573    6932 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0624 03:36:42.265566    6932 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0624 03:36:42.273805    6932 out.go:204]   - Generating certificates and keys ...
	I0624 03:36:42.273841    6932 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0624 03:36:42.273874    6932 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0624 03:36:42.273927    6932 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0624 03:36:42.273956    6932 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0624 03:36:42.273990    6932 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0624 03:36:42.274013    6932 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0624 03:36:42.274046    6932 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0624 03:36:42.274082    6932 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0624 03:36:42.274119    6932 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0624 03:36:42.274152    6932 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0624 03:36:42.274180    6932 kubeadm.go:309] [certs] Using the existing "sa" key
	I0624 03:36:42.274210    6932 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0624 03:36:42.307482    6932 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0624 03:36:42.348811    6932 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0624 03:36:42.444238    6932 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0624 03:36:42.513080    6932 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0624 03:36:42.543464    6932 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0624 03:36:42.543940    6932 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0624 03:36:42.544063    6932 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0624 03:36:42.631858    6932 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0624 03:36:42.640012    6932 out.go:204]   - Booting up control plane ...
	I0624 03:36:42.640067    6932 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0624 03:36:42.640111    6932 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0624 03:36:42.640148    6932 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0624 03:36:42.640190    6932 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0624 03:36:42.640267    6932 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0624 03:36:40.177615    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:36:40.177694    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:36:47.138144    6932 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501909 seconds
	I0624 03:36:47.138343    6932 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0624 03:36:47.141692    6932 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0624 03:36:47.650952    6932 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0624 03:36:47.651047    6932 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-398000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0624 03:36:48.155137    6932 kubeadm.go:309] [bootstrap-token] Using token: abt9zh.ri93u3l2pr9sv07s
	I0624 03:36:48.161393    6932 out.go:204]   - Configuring RBAC rules ...
	I0624 03:36:48.161452    6932 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0624 03:36:48.161491    6932 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0624 03:36:48.166377    6932 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0624 03:36:48.167148    6932 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0624 03:36:48.167969    6932 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0624 03:36:48.168811    6932 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0624 03:36:48.172980    6932 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0624 03:36:48.349732    6932 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0624 03:36:48.559338    6932 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0624 03:36:48.559903    6932 kubeadm.go:309] 
	I0624 03:36:48.559938    6932 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0624 03:36:48.559941    6932 kubeadm.go:309] 
	I0624 03:36:48.560033    6932 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0624 03:36:48.560038    6932 kubeadm.go:309] 
	I0624 03:36:48.560051    6932 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0624 03:36:48.560084    6932 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0624 03:36:48.560117    6932 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0624 03:36:48.560120    6932 kubeadm.go:309] 
	I0624 03:36:48.560145    6932 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0624 03:36:48.560150    6932 kubeadm.go:309] 
	I0624 03:36:48.560173    6932 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0624 03:36:48.560175    6932 kubeadm.go:309] 
	I0624 03:36:48.560199    6932 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0624 03:36:48.560275    6932 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0624 03:36:48.560339    6932 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0624 03:36:48.560345    6932 kubeadm.go:309] 
	I0624 03:36:48.560387    6932 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0624 03:36:48.560515    6932 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0624 03:36:48.560520    6932 kubeadm.go:309] 
	I0624 03:36:48.560614    6932 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token abt9zh.ri93u3l2pr9sv07s \
	I0624 03:36:48.560681    6932 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:589c6e059dc550e81a31ec2e6905c9b09b436fd51ac2d4f41e53b0889a4a68c0 \
	I0624 03:36:48.560695    6932 kubeadm.go:309] 	--control-plane 
	I0624 03:36:48.560697    6932 kubeadm.go:309] 
	I0624 03:36:48.560738    6932 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0624 03:36:48.560741    6932 kubeadm.go:309] 
	I0624 03:36:48.560796    6932 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token abt9zh.ri93u3l2pr9sv07s \
	I0624 03:36:48.560934    6932 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:589c6e059dc550e81a31ec2e6905c9b09b436fd51ac2d4f41e53b0889a4a68c0 
	I0624 03:36:48.561022    6932 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0624 03:36:48.561031    6932 cni.go:84] Creating CNI manager for ""
	I0624 03:36:48.561043    6932 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:36:48.565257    6932 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0624 03:36:48.573248    6932 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0624 03:36:48.577640    6932 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0624 03:36:48.584525    6932 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0624 03:36:48.584611    6932 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 03:36:48.584612    6932 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-398000 minikube.k8s.io/updated_at=2024_06_24T03_36_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec minikube.k8s.io/name=running-upgrade-398000 minikube.k8s.io/primary=true
	I0624 03:36:48.629667    6932 kubeadm.go:1107] duration metric: took 45.095917ms to wait for elevateKubeSystemPrivileges
	I0624 03:36:48.629674    6932 ops.go:34] apiserver oom_adj: -16
	W0624 03:36:48.629698    6932 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0624 03:36:48.629702    6932 kubeadm.go:393] duration metric: took 4m12.755816875s to StartCluster
	I0624 03:36:48.629711    6932 settings.go:142] acquiring lock: {Name:mk350ce6fa96c4a87ff2b5575a8be101ddfe67cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:36:48.629807    6932 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:36:48.630225    6932 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/kubeconfig: {Name:mkbbeb070f5681b596c7409cd66efdb520d422d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:36:48.630434    6932 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:36:48.630491    6932 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0624 03:36:48.630542    6932 config.go:182] Loaded profile config "running-upgrade-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0624 03:36:48.630552    6932 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-398000"
	I0624 03:36:48.630565    6932 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-398000"
	I0624 03:36:48.630572    6932 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-398000"
	I0624 03:36:48.630586    6932 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-398000"
	W0624 03:36:48.630576    6932 addons.go:243] addon storage-provisioner should already be in state true
	I0624 03:36:48.630624    6932 host.go:66] Checking if "running-upgrade-398000" exists ...
	I0624 03:36:48.631635    6932 kapi.go:59] client config for running-upgrade-398000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/running-upgrade-398000/client.key", CAFile:"/Users/jenkins/minikube-integration/19124-4612/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10655ed80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0624 03:36:48.631761    6932 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-398000"
	W0624 03:36:48.631765    6932 addons.go:243] addon default-storageclass should already be in state true
	I0624 03:36:48.631772    6932 host.go:66] Checking if "running-upgrade-398000" exists ...
	I0624 03:36:48.633278    6932 out.go:177] * Verifying Kubernetes components...
	I0624 03:36:48.633710    6932 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0624 03:36:48.637363    6932 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0624 03:36:48.637369    6932 sshutil.go:53] new ssh client: &{IP:localhost Port:51144 SSHKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/running-upgrade-398000/id_rsa Username:docker}
	I0624 03:36:48.640213    6932 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0624 03:36:48.644191    6932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:36:48.648284    6932 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0624 03:36:48.648291    6932 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0624 03:36:48.648297    6932 sshutil.go:53] new ssh client: &{IP:localhost Port:51144 SSHKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/running-upgrade-398000/id_rsa Username:docker}
	I0624 03:36:48.726790    6932 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0624 03:36:48.731821    6932 api_server.go:52] waiting for apiserver process to appear ...
	I0624 03:36:48.731866    6932 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 03:36:48.735983    6932 api_server.go:72] duration metric: took 105.53875ms to wait for apiserver process to appear ...
	I0624 03:36:48.735991    6932 api_server.go:88] waiting for apiserver healthz status ...
	I0624 03:36:48.735997    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:36:48.773546    6932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0624 03:36:48.784476    6932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0624 03:36:45.178499    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:36:45.178520    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:36:53.738091    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:36:53.738136    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:36:50.178945    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:36:50.178967    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:36:58.738470    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:36:58.738513    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:36:55.179531    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:36:55.179571    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:03.738878    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:03.738912    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:00.180318    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:00.180354    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:05.181285    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:05.181324    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0624 03:37:05.568163    6914 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0624 03:37:05.574505    6914 out.go:177] * Enabled addons: storage-provisioner
	I0624 03:37:08.739344    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:08.739376    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:05.584375    6914 addons.go:510] duration metric: took 30.503520709s for enable addons: enabled=[storage-provisioner]
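
Both runs fail to enable default-storageclass because listing StorageClasses times out against the unreachable apiserver. Once the apiserver is reachable, the addon's effect reduces to marking one class as the cluster default, which can be checked or applied by hand as sketched below (assumption: the class is named "standard", minikube's usual name):

	kubectl get storageclass
	# the default-storageclass addon effectively sets this annotation:
	kubectl annotate storageclass standard \
	    storageclass.kubernetes.io/is-default-class=true --overwrite
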
	I0624 03:37:13.739954    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:13.739974    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:10.182554    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:10.182585    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:18.740716    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:18.740776    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0624 03:37:19.115134    6932 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0624 03:37:19.119711    6932 out.go:177] * Enabled addons: storage-provisioner
	I0624 03:37:15.184138    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:15.184179    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:19.131697    6932 addons.go:510] duration metric: took 30.501477125s for enable addons: enabled=[storage-provisioner]
	I0624 03:37:23.741774    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:23.741800    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:20.186135    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:20.186171    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:28.743021    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:28.743067    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:25.188348    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:25.188386    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:33.744707    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:33.744746    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:30.190594    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:30.190615    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:38.745664    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:38.745686    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:35.190903    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:35.191065    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:37:35.201372    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:37:35.201446    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:37:35.211971    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:37:35.212039    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:37:35.222233    6914 logs.go:276] 2 containers: [802c943d5cef db7f249020db]
	I0624 03:37:35.222297    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:37:35.232204    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:37:35.232273    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:37:35.242491    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:37:35.242559    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:37:35.252693    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:37:35.252761    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:37:35.263157    6914 logs.go:276] 0 containers: []
	W0624 03:37:35.263168    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:37:35.263224    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:37:35.273266    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:37:35.273288    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:37:35.273294    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:37:35.277723    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:37:35.277733    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:37:35.311883    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:37:35.311899    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:37:35.326105    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:37:35.326117    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:37:35.337752    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:37:35.337764    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:37:35.349491    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:37:35.349502    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:37:35.366382    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:37:35.366393    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:37:35.378849    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:37:35.378859    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:37:35.402992    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:37:35.402999    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:37:35.413887    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:37:35.413899    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:37:35.448749    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:37:35.448760    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:37:35.462584    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:37:35.462593    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:37:35.473738    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:37:35.473749    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:37:37.990930    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:43.747842    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:43.747881    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:42.993201    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:42.993307    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:37:43.004894    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:37:43.004967    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:37:43.015293    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:37:43.015363    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:37:43.025833    6914 logs.go:276] 2 containers: [802c943d5cef db7f249020db]
	I0624 03:37:43.025894    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:37:43.036152    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:37:43.036212    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:37:43.046474    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:37:43.046532    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:37:43.057347    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:37:43.057417    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:37:43.067875    6914 logs.go:276] 0 containers: []
	W0624 03:37:43.067886    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:37:43.067938    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:37:43.078499    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:37:43.078515    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:37:43.078520    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:37:43.089693    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:37:43.089703    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:37:43.101335    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:37:43.101345    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:37:43.112577    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:37:43.112591    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:37:43.136909    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:37:43.136916    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:37:43.172293    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:37:43.172301    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:37:43.187370    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:37:43.187381    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:37:43.201500    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:37:43.201511    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:37:43.213052    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:37:43.213063    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:37:43.224575    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:37:43.224587    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:37:43.229205    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:37:43.229213    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:37:43.265218    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:37:43.265229    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:37:43.284138    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:37:43.284150    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:37:48.748639    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:48.748751    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:37:48.764117    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:37:48.764185    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:37:48.774522    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:37:48.774578    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:37:48.785140    6932 logs.go:276] 2 containers: [33271b9f8b21 10af503aede9]
	I0624 03:37:48.785206    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:37:48.795701    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:37:48.795760    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:37:48.806170    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:37:48.806231    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:37:48.820432    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:37:48.820497    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:37:48.830135    6932 logs.go:276] 0 containers: []
	W0624 03:37:48.830145    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:37:48.830192    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:37:48.842499    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:37:48.842515    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:37:48.842521    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:37:48.853996    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:37:48.854008    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:37:48.865030    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:37:48.865040    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:37:48.901362    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:37:48.901380    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:37:48.916314    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:37:48.916324    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:37:48.930533    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:37:48.930541    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:37:48.942512    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:37:48.942521    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:37:48.953914    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:37:48.953925    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:37:48.968934    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:37:48.968944    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:37:48.979926    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:37:48.979937    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:37:48.984461    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:37:48.984468    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:37:49.020253    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:37:49.020265    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:37:49.037569    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:37:49.037580    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:37:45.804229    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:51.562823    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:50.806856    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:50.807050    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:37:50.826805    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:37:50.826900    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:37:50.843598    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:37:50.843680    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:37:50.855774    6914 logs.go:276] 2 containers: [802c943d5cef db7f249020db]
	I0624 03:37:50.855840    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:37:50.866442    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:37:50.866505    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:37:50.877214    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:37:50.877293    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:37:50.891479    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:37:50.891550    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:37:50.903711    6914 logs.go:276] 0 containers: []
	W0624 03:37:50.903722    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:37:50.903777    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:37:50.914066    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:37:50.914084    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:37:50.914090    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:37:50.918439    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:37:50.918446    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:37:50.952713    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:37:50.952723    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:37:50.967221    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:37:50.967232    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:37:50.982582    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:37:50.982593    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:37:50.994582    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:37:50.994594    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:37:51.011344    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:37:51.011354    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:37:51.035921    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:37:51.035928    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:37:51.070869    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:37:51.070877    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:37:51.084709    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:37:51.084719    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:37:51.097536    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:37:51.097547    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:37:51.109793    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:37:51.109805    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:37:51.126337    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:37:51.126348    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:37:53.640338    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:56.565094    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:56.565275    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:37:56.578888    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:37:56.578969    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:37:56.590714    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:37:56.590780    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:37:56.604523    6932 logs.go:276] 2 containers: [33271b9f8b21 10af503aede9]
	I0624 03:37:56.604589    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:37:56.615235    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:37:56.615309    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:37:56.625834    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:37:56.625904    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:37:56.636545    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:37:56.636616    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:37:56.651791    6932 logs.go:276] 0 containers: []
	W0624 03:37:56.651804    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:37:56.651857    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:37:56.662437    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:37:56.662451    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:37:56.662456    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:37:56.679628    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:37:56.679640    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:37:56.715050    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:37:56.715057    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:37:56.719442    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:37:56.719449    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:37:56.733187    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:37:56.733201    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:37:56.745726    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:37:56.745740    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:37:56.760475    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:37:56.760485    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:37:56.783576    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:37:56.783585    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:37:56.796203    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:37:56.796215    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:37:56.834689    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:37:56.834700    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:37:56.848529    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:37:56.848540    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:37:56.860309    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:37:56.860319    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:37:56.872358    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:37:56.872368    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:37:58.642192    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:58.642327    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:37:58.655851    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:37:58.655920    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:37:58.666579    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:37:58.666639    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:37:58.676859    6914 logs.go:276] 2 containers: [802c943d5cef db7f249020db]
	I0624 03:37:58.676933    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:37:58.687639    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:37:58.687705    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:37:58.698054    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:37:58.698126    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:37:58.708544    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:37:58.708610    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:37:58.719517    6914 logs.go:276] 0 containers: []
	W0624 03:37:58.719528    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:37:58.719583    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:37:58.730323    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:37:58.730337    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:37:58.730342    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:37:58.742044    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:37:58.742054    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:37:58.753538    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:37:58.753548    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:37:58.770588    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:37:58.770599    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:37:58.784112    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:37:58.784122    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:37:58.818958    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:37:58.818965    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:37:58.823031    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:37:58.823037    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:37:58.836650    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:37:58.836661    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:37:58.858978    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:37:58.858989    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:37:58.875495    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:37:58.875506    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:37:58.899926    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:37:58.899940    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:37:58.912514    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:37:58.912525    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:37:58.949508    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:37:58.949521    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:37:59.385454    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:38:01.466115    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:38:04.387622    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:38:04.387809    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:38:04.405479    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:38:04.405564    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:38:04.420553    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:38:04.420645    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:38:04.433742    6932 logs.go:276] 2 containers: [33271b9f8b21 10af503aede9]
	I0624 03:38:04.433806    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:38:04.448274    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:38:04.448341    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:38:04.459113    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:38:04.459182    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:38:04.469388    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:38:04.469451    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:38:04.479709    6932 logs.go:276] 0 containers: []
	W0624 03:38:04.479722    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:38:04.479778    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:38:04.489940    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:38:04.489957    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:38:04.489963    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:38:04.506931    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:38:04.506942    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:38:04.532217    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:38:04.532225    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:38:04.567338    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:38:04.567344    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:38:04.602468    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:38:04.602477    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:38:04.617201    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:38:04.617213    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:38:04.631167    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:38:04.631177    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:38:04.643187    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:38:04.643199    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:38:04.655643    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:38:04.655655    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:38:04.660443    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:38:04.660454    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:38:04.672197    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:38:04.672206    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:38:04.683783    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:38:04.683793    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:38:04.698564    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:38:04.698574    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:38:07.219516    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:38:06.468293    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:38:06.468405    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:38:06.482962    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:38:06.483040    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:38:06.494896    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:38:06.494962    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:38:06.505842    6914 logs.go:276] 2 containers: [802c943d5cef db7f249020db]
	I0624 03:38:06.505905    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:38:06.516916    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:38:06.516982    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:38:06.527584    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:38:06.527651    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:38:06.537921    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:38:06.537983    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:38:06.548701    6914 logs.go:276] 0 containers: []
	W0624 03:38:06.548712    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:38:06.548759    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:38:06.559415    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:38:06.559429    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:38:06.559434    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:38:06.571503    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:38:06.571518    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:38:06.585753    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:38:06.585763    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:38:06.589897    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:38:06.589903    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:38:06.624912    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:38:06.624921    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:38:06.639663    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:38:06.639674    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:38:06.653532    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:38:06.653545    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:38:06.664727    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:38:06.664740    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:38:06.678968    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:38:06.678980    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:38:06.691607    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:38:06.691617    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:38:06.725052    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:38:06.725066    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:38:06.740922    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:38:06.740939    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:38:06.765936    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:38:06.765951    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:38:09.284760    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:38:12.221790    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:38:12.222017    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:38:12.249441    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:38:12.249529    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:38:12.262756    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:38:12.262836    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:38:12.274337    6932 logs.go:276] 2 containers: [33271b9f8b21 10af503aede9]
	I0624 03:38:12.274402    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:38:12.284752    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:38:12.284811    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:38:12.295223    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:38:12.295284    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:38:12.305961    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:38:12.306031    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:38:12.321087    6932 logs.go:276] 0 containers: []
	W0624 03:38:12.321098    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:38:12.321151    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:38:12.331851    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:38:12.331866    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:38:12.331872    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:38:12.344506    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:38:12.344516    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:38:12.362187    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:38:12.362200    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:38:12.395791    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:38:12.395798    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:38:12.434259    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:38:12.434273    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:38:12.448665    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:38:12.448678    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:38:12.463271    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:38:12.463281    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:38:12.477533    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:38:12.477542    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:38:12.492545    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:38:12.492558    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:38:12.503779    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:38:12.503789    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:38:12.515284    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:38:12.515292    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:38:12.520133    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:38:12.520141    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:38:12.531598    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:38:12.531611    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:38:14.287054    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:38:14.287221    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:38:14.301200    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:38:14.301270    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:38:14.314727    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:38:14.314796    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:38:14.325309    6914 logs.go:276] 2 containers: [802c943d5cef db7f249020db]
	I0624 03:38:14.325375    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:38:14.336072    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:38:14.336137    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:38:14.346893    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:38:14.346964    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:38:14.356959    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:38:14.357020    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:38:14.366966    6914 logs.go:276] 0 containers: []
	W0624 03:38:14.366980    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:38:14.367036    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:38:14.376960    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:38:14.376974    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:38:14.376979    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:38:14.411582    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:38:14.411590    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:38:14.425562    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:38:14.425572    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:38:14.437895    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:38:14.437907    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:38:14.449451    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:38:14.449462    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:38:14.460727    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:38:14.460738    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:38:14.484433    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:38:14.484441    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:38:14.489165    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:38:14.489172    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:38:14.522794    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:38:14.522809    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:38:14.537233    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:38:14.537243    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:38:14.548687    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:38:14.548697    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:38:14.567493    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:38:14.567504    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:38:14.584606    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:38:14.584617    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:38:15.056941    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:38:17.098202    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:38:20.057457    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:38:20.057645    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:38:20.069774    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:38:20.069894    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:38:20.084451    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:38:20.084520    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:38:20.095090    6932 logs.go:276] 2 containers: [33271b9f8b21 10af503aede9]
	I0624 03:38:20.095155    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:38:20.105531    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:38:20.105591    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:38:20.115762    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:38:20.115815    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:38:20.126166    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:38:20.126222    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:38:20.136384    6932 logs.go:276] 0 containers: []
	W0624 03:38:20.136395    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:38:20.136450    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:38:20.146879    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:38:20.146894    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:38:20.146899    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:38:20.181304    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:38:20.181311    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:38:20.185552    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:38:20.185559    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:38:20.199111    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:38:20.199120    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:38:20.212962    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:38:20.212976    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:38:20.224626    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:38:20.224639    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:38:20.249651    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:38:20.249662    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:38:20.261475    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:38:20.261498    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:38:20.298340    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:38:20.298353    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:38:20.309852    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:38:20.309863    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:38:20.321734    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:38:20.321748    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:38:20.337078    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:38:20.337092    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:38:20.355365    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:38:20.355379    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:38:22.867202    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:38:22.100462    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:38:22.100683    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:38:22.123876    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:38:22.123986    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:38:22.140778    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:38:22.140851    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:38:22.153615    6914 logs.go:276] 2 containers: [802c943d5cef db7f249020db]
	I0624 03:38:22.153690    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:38:22.165023    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:38:22.165086    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:38:22.175422    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:38:22.175488    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:38:22.186023    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:38:22.186090    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:38:22.199532    6914 logs.go:276] 0 containers: []
	W0624 03:38:22.199545    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:38:22.199613    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:38:22.211191    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:38:22.211204    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:38:22.211210    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:38:22.223301    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:38:22.223313    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:38:22.257237    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:38:22.257249    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:38:22.261659    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:38:22.261665    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:38:22.277297    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:38:22.277308    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:38:22.289757    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:38:22.289766    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:38:22.304246    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:38:22.304256    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:38:22.324168    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:38:22.324180    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:38:22.335757    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:38:22.335766    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:38:22.373417    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:38:22.373428    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:38:22.392510    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:38:22.392519    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:38:22.404115    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:38:22.404125    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:38:22.415998    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:38:22.416008    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:38:24.943402    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:38:27.869837    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:38:27.870280    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:38:27.903929    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:38:27.904066    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:38:27.924162    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:38:27.924268    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:38:27.943240    6932 logs.go:276] 2 containers: [33271b9f8b21 10af503aede9]
	I0624 03:38:27.943317    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:38:27.955030    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:38:27.955101    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:38:27.966612    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:38:27.966693    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:38:27.982095    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:38:27.982168    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:38:27.993038    6932 logs.go:276] 0 containers: []
	W0624 03:38:27.993050    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:38:27.993107    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:38:28.003854    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:38:28.003870    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:38:28.003876    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:38:28.023222    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:38:28.023233    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:38:28.035650    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:38:28.035660    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:38:28.053470    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:38:28.053479    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:38:28.065667    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:38:28.065676    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:38:28.100340    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:38:28.100348    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:38:28.104660    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:38:28.104665    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:38:28.140989    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:38:28.141001    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:38:28.155090    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:38:28.155100    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:38:28.168965    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:38:28.168980    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:38:28.181314    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:38:28.181324    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:38:28.192877    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:38:28.192892    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:38:28.216815    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:38:28.216822    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:38:29.945695    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:38:29.946225    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:38:29.978398    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:38:29.978533    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:38:29.996551    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:38:29.996654    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:38:30.013628    6914 logs.go:276] 2 containers: [802c943d5cef db7f249020db]
	I0624 03:38:30.013695    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:38:30.029559    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:38:30.029636    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:38:30.040031    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:38:30.040092    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:38:30.729893    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:38:30.054621    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:38:30.054686    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:38:30.064683    6914 logs.go:276] 0 containers: []
	W0624 03:38:30.064698    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:38:30.064759    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:38:30.075919    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:38:30.075938    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:38:30.075943    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:38:30.088411    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:38:30.088425    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:38:30.105821    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:38:30.105831    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:38:30.130762    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:38:30.130771    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:38:30.135067    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:38:30.135074    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:38:30.154480    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:38:30.154493    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:38:30.168378    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:38:30.168392    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:38:30.186043    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:38:30.186056    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:38:30.197730    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:38:30.197745    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:38:30.209148    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:38:30.209163    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:38:30.242631    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:38:30.242638    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:38:30.285395    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:38:30.285406    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:38:30.297567    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:38:30.297581    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:38:32.810940    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:38:35.732055    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
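The two lines above show the failure mode that drives the rest of this log: each `Checking apiserver healthz at https://10.0.2.15:8443/healthz` probe ends in a client-side `context deadline exceeded` after roughly five seconds, and minikube then re-inventories the cluster containers and re-collects their logs before trying again. A minimal sketch of such a probe follows; it is not minikube's actual implementation, and the timeout and TLS settings are assumptions inferred from the probe gaps in this log.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz probes the apiserver health endpoint once, failing with a
// client-side timeout error when no headers arrive in time -- the same
// "context deadline exceeded" shape reported by api_server.go:269 above.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // assumption: inferred from the ~5s probe gaps
		Transport: &http.Transport{
			// assumption: the in-VM apiserver cert is self-signed
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("stopped: %s: %w", url, err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println(err)
	}
}
```

Because the apiserver never reports healthy, every probe falls into the diagnostic branch, which is why the same gather cycle repeats for the remainder of this log.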
	I0624 03:38:35.732197    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:38:35.744206    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:38:35.744285    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:38:35.755014    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:38:35.755081    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:38:35.765430    6932 logs.go:276] 2 containers: [33271b9f8b21 10af503aede9]
	I0624 03:38:35.765491    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:38:35.775610    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:38:35.775676    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:38:35.786294    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:38:35.786360    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:38:35.796590    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:38:35.796653    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:38:35.806966    6932 logs.go:276] 0 containers: []
	W0624 03:38:35.806978    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:38:35.807036    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:38:35.817862    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:38:35.817877    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:38:35.817882    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:38:35.829261    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:38:35.829271    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:38:35.843934    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:38:35.843944    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:38:35.855983    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:38:35.855996    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:38:35.873489    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:38:35.873500    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:38:35.907832    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:38:35.907843    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:38:35.912315    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:38:35.912320    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:38:35.926386    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:38:35.926395    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:38:35.937595    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:38:35.937605    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:38:35.961547    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:38:35.961555    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:38:35.972610    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:38:35.972623    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:38:36.006383    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:38:36.006396    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:38:36.025274    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:38:36.025284    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
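Each gather cycle starts by discovering the control-plane containers: one `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` call per component, whose ID count is then reported by `logs.go:276`. A rough equivalent of that discovery step, shelling out to docker (the helper name is illustrative, not minikube's API):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists the IDs of all containers (running or exited) whose
// name carries the k8s_<component> prefix, one ID per output line -- the
// same shape as the docker ps calls logged by ssh_runner.go:195 above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := containerIDs("coredns")
	if err != nil {
		fmt.Println("docker ps failed:", err)
		return
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}
```

An empty result is how the `No container was found matching "kindnet"` warnings arise: kindnet is not deployed in this configuration, so its filter matches nothing and that component is simply skipped.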
	I0624 03:38:38.539096    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:38:37.812425    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:38:37.812634    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:38:37.832983    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:38:37.833075    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:38:37.846959    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:38:37.847021    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:38:37.859444    6914 logs.go:276] 2 containers: [802c943d5cef db7f249020db]
	I0624 03:38:37.859512    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:38:37.870437    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:38:37.870501    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:38:37.882213    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:38:37.882278    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:38:37.893658    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:38:37.893721    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:38:37.903751    6914 logs.go:276] 0 containers: []
	W0624 03:38:37.903762    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:38:37.903819    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:38:37.913987    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:38:37.914002    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:38:37.914007    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:38:37.931280    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:38:37.931289    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:38:37.943017    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:38:37.943027    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:38:37.976942    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:38:37.976950    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:38:37.981340    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:38:37.981347    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:38:38.015131    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:38:38.015141    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:38:38.029942    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:38:38.029953    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:38:38.044245    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:38:38.044254    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:38:38.056001    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:38:38.056012    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:38:38.081254    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:38:38.081262    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:38:38.093184    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:38:38.093196    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:38:38.108791    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:38:38.108807    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:38:38.120690    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:38:38.120700    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
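The `container status` step just above uses a shell fallback rather than a fixed binary: prefer `crictl` when it is installed, otherwise fall back to `docker ps -a`. A sketch of that fallback, with the command string copied verbatim from the log and only the Go wrapper being illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus runs the crictl-or-docker fallback seen in the log.
// The backticked `which crictl || echo crictl` keeps the command word
// non-empty when crictl is absent, so the first branch fails cleanly and
// the `|| sudo docker ps -a` fallback runs instead.
func containerStatus() (string, error) {
	cmd := exec.Command("/bin/bash", "-c",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("both crictl and docker failed:", err)
	}
	fmt.Print(out)
}
```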
	I0624 03:38:43.541318    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:38:43.541480    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:38:43.560592    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:38:43.560669    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:38:43.574646    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:38:43.574720    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:38:43.585715    6932 logs.go:276] 2 containers: [33271b9f8b21 10af503aede9]
	I0624 03:38:43.585780    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:38:43.596463    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:38:43.596529    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:38:43.606994    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:38:43.607064    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:38:43.617986    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:38:43.618050    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:38:43.628061    6932 logs.go:276] 0 containers: []
	W0624 03:38:43.628073    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:38:43.628126    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:38:43.639060    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:38:43.639076    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:38:43.639081    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:38:43.654055    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:38:43.654065    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:38:43.666984    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:38:43.666994    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:38:43.681624    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:38:43.681634    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:38:43.693145    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:38:43.693155    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:38:43.704620    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:38:43.704630    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:38:43.739329    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:38:43.739341    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:38:43.774168    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:38:43.774180    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:38:43.788739    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:38:43.788748    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:38:43.811938    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:38:43.811945    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:38:43.823516    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:38:43.823526    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:38:43.828983    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:38:43.828992    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:38:43.841807    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:38:43.841817    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:38:40.634988    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:38:46.362992    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:38:45.637173    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:38:45.637301    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:38:45.648872    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:38:45.648944    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:38:45.659520    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:38:45.659593    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:38:45.670423    6914 logs.go:276] 2 containers: [802c943d5cef db7f249020db]
	I0624 03:38:45.670486    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:38:45.680976    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:38:45.681033    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:38:45.691126    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:38:45.691183    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:38:45.701227    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:38:45.701294    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:38:45.711012    6914 logs.go:276] 0 containers: []
	W0624 03:38:45.711023    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:38:45.711077    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:38:45.721418    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:38:45.721434    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:38:45.721439    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:38:45.735179    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:38:45.735189    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:38:45.747137    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:38:45.747148    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:38:45.751658    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:38:45.751665    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:38:45.785794    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:38:45.785805    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:38:45.804443    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:38:45.804453    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:38:45.818858    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:38:45.818867    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:38:45.830866    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:38:45.830876    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:38:45.848864    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:38:45.848878    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:38:45.882298    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:38:45.882308    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:38:45.896309    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:38:45.896320    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:38:45.912208    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:38:45.912218    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:38:45.923775    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:38:45.923787    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
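Every `Gathering logs for <component> [<id>] ...` pair in these cycles resolves to a `docker logs --tail 400 <id>` invocation, capping each component at its last 400 lines so a wedged container cannot flood the report. A minimal wrapper for that step (the helper is illustrative, not minikube's code):

```go
package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs returns the last n lines of a container's logs,
// mirroring the `docker logs --tail 400 <id>` commands above.
func tailContainerLogs(id string, n int) (string, error) {
	out, err := exec.Command("docker", "logs",
		"--tail", fmt.Sprint(n), id).CombinedOutput()
	return string(out), err
}

func main() {
	// e.g. the etcd container ID seen in this log
	out, err := tailContainerLogs("8e7f51e3a34a", 400)
	if err != nil {
		fmt.Println(err)
	}
	fmt.Print(out)
}
```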
	I0624 03:38:48.450814    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:38:51.365289    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:38:51.365538    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:38:51.384348    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:38:51.384438    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:38:51.398499    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:38:51.398576    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:38:51.410010    6932 logs.go:276] 2 containers: [33271b9f8b21 10af503aede9]
	I0624 03:38:51.410081    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:38:51.420292    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:38:51.420360    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:38:51.432975    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:38:51.433041    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:38:51.443559    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:38:51.443628    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:38:51.453786    6932 logs.go:276] 0 containers: []
	W0624 03:38:51.453799    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:38:51.453859    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:38:51.464093    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:38:51.464106    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:38:51.464111    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:38:51.478202    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:38:51.478214    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:38:51.490003    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:38:51.490014    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:38:51.501860    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:38:51.501869    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:38:51.513792    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:38:51.513802    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:38:51.530981    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:38:51.530992    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:38:51.543061    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:38:51.543071    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:38:51.567727    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:38:51.567736    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:38:51.603606    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:38:51.603617    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:38:51.608210    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:38:51.608221    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:38:51.621529    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:38:51.621540    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:38:51.636742    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:38:51.636751    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:38:51.648426    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:38:51.648434    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
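Host-side services are collected the same way but from the systemd journal: `journalctl -u kubelet -n 400` for the kubelet, and `journalctl -u docker -u cri-docker -n 400` for the `Docker ...` step. A sketch, assuming a systemd guest (the helper name is hypothetical):

```go
package main

import (
	"fmt"
	"os/exec"
)

// journalTail returns the last n lines of one or more systemd units,
// mirroring the journalctl invocations in the log above.
func journalTail(n int, units ...string) (string, error) {
	args := []string{"journalctl", "-n", fmt.Sprint(n)}
	for _, u := range units {
		args = append(args, "-u", u)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	return string(out), err
}

func main() {
	out, _ := journalTail(400, "kubelet")
	fmt.Print(out)
}
```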
	I0624 03:38:53.452962    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:38:53.453141    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:38:53.476222    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:38:53.476334    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:38:53.491983    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:38:53.492056    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:38:53.505051    6914 logs.go:276] 4 containers: [a398a7123448 0fa211015fd9 802c943d5cef db7f249020db]
	I0624 03:38:53.505127    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:38:53.515580    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:38:53.515644    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:38:53.527521    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:38:53.527582    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:38:53.537661    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:38:53.537730    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:38:53.547973    6914 logs.go:276] 0 containers: []
	W0624 03:38:53.547983    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:38:53.548031    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:38:53.558139    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:38:53.558155    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:38:53.558161    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:38:53.574436    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:38:53.574447    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:38:53.598999    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:38:53.599010    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:38:53.603447    6914 logs.go:123] Gathering logs for coredns [a398a7123448] ...
	I0624 03:38:53.603455    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a398a7123448"
	I0624 03:38:53.615136    6914 logs.go:123] Gathering logs for coredns [0fa211015fd9] ...
	I0624 03:38:53.615148    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fa211015fd9"
	I0624 03:38:53.631435    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:38:53.631446    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:38:53.646792    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:38:53.646802    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:38:53.681509    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:38:53.681517    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:38:53.693033    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:38:53.693043    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:38:53.729356    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:38:53.729368    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:38:53.743678    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:38:53.743688    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:38:53.755122    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:38:53.755132    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:38:53.772099    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:38:53.772108    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:38:53.783346    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:38:53.783356    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:38:53.794998    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:38:53.795009    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
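Kernel-side evidence comes from the `dmesg` step: human-readable output with the pager suppressed (`-PH`), color disabled (`-L=never`), restricted to warnings and above, and trimmed to 400 lines. The same pipeline, wrapped for completeness (the command string is copied from the log; only the wrapper is illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Severity-filtered kernel messages, last 400 lines, as gathered above.
	out, err := exec.Command("/bin/bash", "-c",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400").CombinedOutput()
	if err != nil {
		fmt.Println("dmesg failed:", err)
	}
	fmt.Print(string(out))
}
```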
	I0624 03:38:54.186043    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:38:56.310880    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:38:59.188263    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:38:59.188457    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:38:59.207691    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:38:59.207781    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:38:59.222006    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:38:59.222068    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:38:59.233545    6932 logs.go:276] 2 containers: [33271b9f8b21 10af503aede9]
	I0624 03:38:59.233599    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:38:59.245634    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:38:59.245702    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:38:59.255859    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:38:59.255918    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:38:59.265980    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:38:59.266046    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:38:59.275825    6932 logs.go:276] 0 containers: []
	W0624 03:38:59.275837    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:38:59.275891    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:38:59.286467    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:38:59.286483    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:38:59.286488    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:38:59.303410    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:38:59.303420    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:38:59.315018    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:38:59.315032    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:38:59.349230    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:38:59.349237    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:38:59.353417    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:38:59.353425    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:38:59.367761    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:38:59.367769    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:38:59.379757    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:38:59.379769    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:38:59.391500    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:38:59.391511    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:38:59.403653    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:38:59.403668    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:38:59.427580    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:38:59.427592    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:38:59.438969    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:38:59.438983    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:38:59.472647    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:38:59.472657    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:38:59.487057    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:38:59.487071    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
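The `describe nodes` step is the one gather step that needs a reachable apiserver, the very component failing its health checks here; it runs the version-matched kubectl shipped inside the VM against the node-local kubeconfig. A sketch of the invocation, with both paths copied from the log:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Version-matched kubectl inside the guest, pointed at the embedded
	// kubeconfig, exactly as the ssh_runner commands above show.
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Println("describe nodes failed:", err)
	}
	fmt.Print(string(out))
}
```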
	I0624 03:39:02.002911    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:39:01.313180    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:39:01.313294    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:39:01.326622    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:39:01.326690    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:39:01.337439    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:39:01.337505    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:39:01.351214    6914 logs.go:276] 4 containers: [a398a7123448 0fa211015fd9 802c943d5cef db7f249020db]
	I0624 03:39:01.351284    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:39:01.363248    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:39:01.363320    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:39:01.375760    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:39:01.375821    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:39:01.386527    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:39:01.386593    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:39:01.396886    6914 logs.go:276] 0 containers: []
	W0624 03:39:01.396902    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:39:01.396952    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:39:01.414704    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:39:01.414720    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:39:01.414725    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:39:01.419407    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:39:01.419414    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:39:01.431679    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:39:01.431689    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:39:01.443299    6914 logs.go:123] Gathering logs for coredns [a398a7123448] ...
	I0624 03:39:01.443308    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a398a7123448"
	I0624 03:39:01.454800    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:39:01.454810    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:39:01.490673    6914 logs.go:123] Gathering logs for coredns [0fa211015fd9] ...
	I0624 03:39:01.490689    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fa211015fd9"
	I0624 03:39:01.502041    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:39:01.502051    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:39:01.513597    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:39:01.513607    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:39:01.527768    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:39:01.527778    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:39:01.539517    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:39:01.539527    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:39:01.563483    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:39:01.563497    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:39:01.598006    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:39:01.598016    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:39:01.613265    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:39:01.613274    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:39:01.640113    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:39:01.640123    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:39:01.651921    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:39:01.651931    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:39:04.168172    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:39:07.005178    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:39:07.005364    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:39:07.026033    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:39:07.026133    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:39:07.039692    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:39:07.039764    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:39:07.052112    6932 logs.go:276] 4 containers: [a467a2817ca1 3f031f564e84 33271b9f8b21 10af503aede9]
	I0624 03:39:07.052180    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:39:07.062964    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:39:07.063028    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:39:07.074310    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:39:07.074371    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:39:07.085709    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:39:07.085775    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:39:07.095649    6932 logs.go:276] 0 containers: []
	W0624 03:39:07.095663    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:39:07.095711    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:39:07.112565    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:39:07.112584    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:39:07.112589    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:39:07.131848    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:39:07.131859    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:39:07.155628    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:39:07.155637    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:39:07.167076    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:39:07.167089    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:39:07.182524    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:39:07.182533    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:39:07.197270    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:39:07.197289    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:39:07.211729    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:39:07.211739    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:39:07.245174    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:39:07.245182    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:39:07.249203    6932 logs.go:123] Gathering logs for coredns [3f031f564e84] ...
	I0624 03:39:07.249209    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f031f564e84"
	I0624 03:39:07.261243    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:39:07.261254    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:39:07.272714    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:39:07.272723    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:39:07.284555    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:39:07.284569    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:39:07.298395    6932 logs.go:123] Gathering logs for coredns [a467a2817ca1] ...
	I0624 03:39:07.298409    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a467a2817ca1"
	I0624 03:39:07.311427    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:39:07.311437    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:39:07.349539    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:39:07.349548    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:39:09.170430    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:39:09.170556    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:39:09.182424    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:39:09.182500    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:39:09.193168    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:39:09.193277    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:39:09.203696    6914 logs.go:276] 4 containers: [a398a7123448 0fa211015fd9 802c943d5cef db7f249020db]
	I0624 03:39:09.203753    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:39:09.213648    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:39:09.213712    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:39:09.224279    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:39:09.224333    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:39:09.234640    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:39:09.234703    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:39:09.244440    6914 logs.go:276] 0 containers: []
	W0624 03:39:09.244456    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:39:09.244507    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:39:09.255166    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:39:09.255180    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:39:09.255185    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:39:09.288957    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:39:09.288970    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:39:09.312073    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:39:09.312086    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:39:09.323200    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:39:09.323213    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:39:09.337638    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:39:09.337650    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:39:09.348407    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:39:09.348416    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:39:09.382465    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:39:09.382473    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:39:09.396777    6914 logs.go:123] Gathering logs for coredns [a398a7123448] ...
	I0624 03:39:09.396785    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a398a7123448"
	I0624 03:39:09.407892    6914 logs.go:123] Gathering logs for coredns [0fa211015fd9] ...
	I0624 03:39:09.407900    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fa211015fd9"
	I0624 03:39:09.419682    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:39:09.419690    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:39:09.431693    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:39:09.431707    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:39:09.443813    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:39:09.443822    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:39:09.469617    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:39:09.469626    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:39:09.473791    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:39:09.473796    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:39:09.485545    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:39:09.485555    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:39:09.862641    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:39:12.011975    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:39:14.864926    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:39:14.865211    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:39:14.892864    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:39:14.892965    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:39:14.910511    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:39:14.910596    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:39:14.923753    6932 logs.go:276] 4 containers: [a467a2817ca1 3f031f564e84 33271b9f8b21 10af503aede9]
	I0624 03:39:14.923829    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:39:14.935452    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:39:14.935517    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:39:14.945889    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:39:14.945958    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:39:14.956398    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:39:14.956464    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:39:14.966445    6932 logs.go:276] 0 containers: []
	W0624 03:39:14.966458    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:39:14.966508    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:39:14.976971    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:39:14.976989    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:39:14.976995    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:39:14.981430    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:39:14.981439    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:39:15.015471    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:39:15.015483    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:39:15.027638    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:39:15.027650    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:39:15.063004    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:39:15.063025    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:39:15.075567    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:39:15.075578    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:39:15.101205    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:39:15.101213    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:39:15.115313    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:39:15.115323    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:39:15.133147    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:39:15.133159    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:39:15.147638    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:39:15.147653    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:39:15.161699    6932 logs.go:123] Gathering logs for coredns [a467a2817ca1] ...
	I0624 03:39:15.161709    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a467a2817ca1"
	I0624 03:39:15.172907    6932 logs.go:123] Gathering logs for coredns [3f031f564e84] ...
	I0624 03:39:15.172918    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f031f564e84"
	I0624 03:39:15.184430    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:39:15.184444    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:39:15.196399    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:39:15.196409    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:39:15.211079    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:39:15.211091    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:39:17.724789    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:39:17.014536    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:39:17.014663    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:39:17.026539    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:39:17.026617    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:39:17.037113    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:39:17.037176    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:39:17.048560    6914 logs.go:276] 4 containers: [a398a7123448 0fa211015fd9 802c943d5cef db7f249020db]
	I0624 03:39:17.048640    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:39:17.059301    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:39:17.059372    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:39:17.070519    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:39:17.070584    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:39:17.081737    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:39:17.081804    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:39:17.092109    6914 logs.go:276] 0 containers: []
	W0624 03:39:17.092122    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:39:17.092180    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:39:17.102642    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:39:17.102659    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:39:17.102665    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:39:17.107032    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:39:17.107039    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:39:17.118294    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:39:17.118304    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:39:17.132045    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:39:17.132055    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:39:17.143649    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:39:17.143663    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:39:17.155191    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:39:17.155202    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:39:17.188324    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:39:17.188331    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:39:17.202741    6914 logs.go:123] Gathering logs for coredns [a398a7123448] ...
	I0624 03:39:17.202755    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a398a7123448"
	I0624 03:39:17.213848    6914 logs.go:123] Gathering logs for coredns [0fa211015fd9] ...
	I0624 03:39:17.213858    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fa211015fd9"
	I0624 03:39:17.225557    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:39:17.225570    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:39:17.250353    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:39:17.250362    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:39:17.262794    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:39:17.262805    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:39:17.299264    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:39:17.299279    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:39:17.319915    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:39:17.319928    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:39:17.337664    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:39:17.337678    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:39:19.851343    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:39:22.727009    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:39:22.727166    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:39:22.741164    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:39:22.741224    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:39:22.751949    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:39:22.752006    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:39:22.762737    6932 logs.go:276] 4 containers: [a467a2817ca1 3f031f564e84 33271b9f8b21 10af503aede9]
	I0624 03:39:22.762809    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:39:22.773546    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:39:22.773616    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:39:22.783976    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:39:22.784030    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:39:22.794363    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:39:22.794421    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:39:22.804521    6932 logs.go:276] 0 containers: []
	W0624 03:39:22.804533    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:39:22.804586    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:39:22.814780    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:39:22.814797    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:39:22.814805    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:39:22.826433    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:39:22.826448    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:39:22.837712    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:39:22.837724    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:39:22.851824    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:39:22.851834    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:39:22.865841    6932 logs.go:123] Gathering logs for coredns [3f031f564e84] ...
	I0624 03:39:22.865855    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f031f564e84"
	I0624 03:39:22.877076    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:39:22.877089    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:39:22.888529    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:39:22.888539    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:39:22.923149    6932 logs.go:123] Gathering logs for coredns [a467a2817ca1] ...
	I0624 03:39:22.923160    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a467a2817ca1"
	I0624 03:39:22.934316    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:39:22.934330    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:39:22.959289    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:39:22.959299    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:39:22.974053    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:39:22.974062    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:39:22.989759    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:39:22.989771    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:39:23.006387    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:39:23.006398    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:39:23.018208    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:39:23.018222    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:39:23.052431    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:39:23.052439    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:39:24.853883    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:39:24.854067    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:39:24.869263    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:39:24.869342    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:39:24.882112    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:39:24.882189    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:39:24.895065    6914 logs.go:276] 4 containers: [a398a7123448 0fa211015fd9 802c943d5cef db7f249020db]
	I0624 03:39:24.895135    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:39:24.906688    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:39:24.906753    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:39:24.917220    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:39:24.917293    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:39:24.927955    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:39:24.928018    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:39:24.941407    6914 logs.go:276] 0 containers: []
	W0624 03:39:24.941420    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:39:24.941474    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:39:24.951681    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:39:24.951697    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:39:24.951704    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:39:24.965831    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:39:24.965846    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:39:24.992000    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:39:24.992008    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:39:25.009115    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:39:25.009128    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:39:25.021070    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:39:25.021080    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:39:25.032557    6914 logs.go:123] Gathering logs for coredns [0fa211015fd9] ...
	I0624 03:39:25.032567    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fa211015fd9"
	I0624 03:39:25.045086    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:39:25.045099    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:39:25.558625    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:39:25.058110    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:39:25.058120    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:39:25.076360    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:39:25.076371    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:39:25.088864    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:39:25.088873    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:39:25.122914    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:39:25.122928    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:39:25.128473    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:39:25.128483    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:39:25.163190    6914 logs.go:123] Gathering logs for coredns [a398a7123448] ...
	I0624 03:39:25.163207    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a398a7123448"
	I0624 03:39:25.174645    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:39:25.174656    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:39:25.189305    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:39:25.189319    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:39:27.703960    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:39:30.560998    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:39:30.561362    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:39:30.592700    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:39:30.592837    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:39:30.615608    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:39:30.615713    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:39:30.629449    6932 logs.go:276] 4 containers: [a467a2817ca1 3f031f564e84 33271b9f8b21 10af503aede9]
	I0624 03:39:30.629523    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:39:30.651893    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:39:30.651958    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:39:30.662948    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:39:30.663009    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:39:30.674396    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:39:30.674456    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:39:30.685049    6932 logs.go:276] 0 containers: []
	W0624 03:39:30.685060    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:39:30.685112    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:39:30.696970    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:39:30.696988    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:39:30.696993    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:39:30.732302    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:39:30.732311    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:39:30.736528    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:39:30.736537    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:39:30.771137    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:39:30.771147    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:39:30.788920    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:39:30.788929    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:39:30.806224    6932 logs.go:123] Gathering logs for coredns [3f031f564e84] ...
	I0624 03:39:30.806234    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f031f564e84"
	I0624 03:39:30.818151    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:39:30.818160    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:39:30.833733    6932 logs.go:123] Gathering logs for coredns [a467a2817ca1] ...
	I0624 03:39:30.833743    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a467a2817ca1"
	I0624 03:39:30.845115    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:39:30.845126    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:39:30.856821    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:39:30.856831    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:39:30.868413    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:39:30.868424    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:39:30.883100    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:39:30.883110    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:39:30.896175    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:39:30.896187    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:39:30.911666    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:39:30.911676    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:39:30.937155    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:39:30.937163    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:39:33.449331    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:39:32.705516    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:39:32.705618    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:39:32.716572    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:39:32.716630    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:39:32.726953    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:39:32.727025    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:39:32.738046    6914 logs.go:276] 4 containers: [a398a7123448 0fa211015fd9 802c943d5cef db7f249020db]
	I0624 03:39:32.738113    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:39:32.749147    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:39:32.749211    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:39:32.759314    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:39:32.759372    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:39:32.769476    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:39:32.769543    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:39:32.779874    6914 logs.go:276] 0 containers: []
	W0624 03:39:32.779886    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:39:32.779939    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:39:32.790253    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:39:32.790268    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:39:32.790273    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:39:32.825036    6914 logs.go:123] Gathering logs for coredns [a398a7123448] ...
	I0624 03:39:32.825044    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a398a7123448"
	I0624 03:39:32.836694    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:39:32.836706    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:39:32.856847    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:39:32.856856    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:39:32.868305    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:39:32.868317    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:39:32.872582    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:39:32.872590    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:39:32.907674    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:39:32.907684    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:39:32.923169    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:39:32.923178    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:39:32.934951    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:39:32.934962    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:39:32.956150    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:39:32.956158    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:39:32.969854    6914 logs.go:123] Gathering logs for coredns [0fa211015fd9] ...
	I0624 03:39:32.969862    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fa211015fd9"
	I0624 03:39:32.981218    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:39:32.981227    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:39:32.993243    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:39:32.993254    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:39:33.007926    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:39:33.007936    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:39:33.019419    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:39:33.019429    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:39:38.451617    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:39:38.451763    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:39:38.464445    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:39:38.464525    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:39:38.475838    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:39:38.475916    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:39:38.486137    6932 logs.go:276] 4 containers: [a467a2817ca1 3f031f564e84 33271b9f8b21 10af503aede9]
	I0624 03:39:38.486203    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:39:38.496760    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:39:38.496831    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:39:38.512557    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:39:38.512628    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:39:38.523056    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:39:38.523118    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:39:38.533545    6932 logs.go:276] 0 containers: []
	W0624 03:39:38.533556    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:39:38.533613    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:39:38.544081    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:39:38.544097    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:39:38.544102    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:39:38.580774    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:39:38.580790    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:39:38.593546    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:39:38.593561    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:39:38.631331    6932 logs.go:123] Gathering logs for coredns [3f031f564e84] ...
	I0624 03:39:38.631342    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f031f564e84"
	I0624 03:39:38.646620    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:39:38.646629    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:39:38.658593    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:39:38.658604    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:39:38.677762    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:39:38.677771    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:39:38.695173    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:39:38.695182    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:39:38.709301    6932 logs.go:123] Gathering logs for coredns [a467a2817ca1] ...
	I0624 03:39:38.709317    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a467a2817ca1"
	I0624 03:39:38.721431    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:39:38.721441    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:39:38.732570    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:39:38.732581    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:39:38.736808    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:39:38.736818    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:39:38.751569    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:39:38.751579    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:39:38.763851    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:39:38.763864    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:39:38.782134    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:39:38.782146    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:39:35.545150    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:39:41.309055    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:39:40.547694    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:39:40.548019    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:39:40.584604    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:39:40.584733    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:39:40.610951    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:39:40.611027    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:39:40.623505    6914 logs.go:276] 4 containers: [a398a7123448 0fa211015fd9 802c943d5cef db7f249020db]
	I0624 03:39:40.623599    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:39:40.637839    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:39:40.637905    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:39:40.648815    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:39:40.648879    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:39:40.660118    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:39:40.660190    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:39:40.673709    6914 logs.go:276] 0 containers: []
	W0624 03:39:40.673720    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:39:40.673776    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:39:40.689165    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:39:40.689182    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:39:40.689187    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:39:40.704151    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:39:40.704161    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:39:40.717747    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:39:40.717759    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:39:40.730351    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:39:40.730363    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:39:40.765435    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:39:40.765445    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:39:40.779815    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:39:40.779825    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:39:40.791184    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:39:40.791193    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:39:40.803517    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:39:40.803531    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:39:40.818009    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:39:40.818018    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:39:40.843294    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:39:40.843302    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:39:40.861145    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:39:40.861155    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:39:40.873729    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:39:40.873740    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:39:40.909657    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:39:40.909672    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:39:40.914516    6914 logs.go:123] Gathering logs for coredns [a398a7123448] ...
	I0624 03:39:40.914523    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a398a7123448"
	I0624 03:39:40.926618    6914 logs.go:123] Gathering logs for coredns [0fa211015fd9] ...
	I0624 03:39:40.926629    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fa211015fd9"
	I0624 03:39:43.440853    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:39:46.311324    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:39:46.311503    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:39:46.327810    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:39:46.327897    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:39:46.340831    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:39:46.340906    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:39:46.351953    6932 logs.go:276] 4 containers: [a467a2817ca1 3f031f564e84 33271b9f8b21 10af503aede9]
	I0624 03:39:46.352025    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:39:46.366450    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:39:46.366509    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:39:46.377328    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:39:46.377394    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:39:46.388840    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:39:46.388901    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:39:46.399707    6932 logs.go:276] 0 containers: []
	W0624 03:39:46.399718    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:39:46.399777    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:39:46.410803    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:39:46.410820    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:39:46.410825    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:39:46.429115    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:39:46.429128    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:39:46.448424    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:39:46.448434    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:39:46.459692    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:39:46.459706    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:39:46.483081    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:39:46.483090    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:39:46.498938    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:39:46.498950    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:39:46.533886    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:39:46.533896    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:39:46.547981    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:39:46.547992    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:39:46.552610    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:39:46.552619    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:39:46.563877    6932 logs.go:123] Gathering logs for coredns [3f031f564e84] ...
	I0624 03:39:46.563888    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f031f564e84"
	I0624 03:39:46.575962    6932 logs.go:123] Gathering logs for coredns [a467a2817ca1] ...
	I0624 03:39:46.575977    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a467a2817ca1"
	I0624 03:39:46.591106    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:39:46.591117    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:39:46.603758    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:39:46.603774    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:39:46.616362    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:39:46.616373    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:39:46.633947    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:39:46.633957    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:39:48.443062    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:39:48.443166    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:39:48.454169    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:39:48.454248    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:39:48.464823    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:39:48.464890    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:39:48.475896    6914 logs.go:276] 4 containers: [a398a7123448 0fa211015fd9 802c943d5cef db7f249020db]
	I0624 03:39:48.475968    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:39:48.492685    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:39:48.492756    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:39:48.503192    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:39:48.503262    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:39:48.514041    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:39:48.514109    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:39:48.524483    6914 logs.go:276] 0 containers: []
	W0624 03:39:48.524494    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:39:48.524549    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:39:48.535327    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:39:48.535344    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:39:48.535349    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:39:48.548104    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:39:48.548114    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:39:48.552339    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:39:48.552345    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:39:48.566678    6914 logs.go:123] Gathering logs for coredns [0fa211015fd9] ...
	I0624 03:39:48.566688    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fa211015fd9"
	I0624 03:39:48.578694    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:39:48.578704    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:39:48.590476    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:39:48.590489    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:39:48.602357    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:39:48.602368    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:39:48.614348    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:39:48.614360    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:39:48.649038    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:39:48.649059    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:39:48.725689    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:39:48.725705    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:39:48.740499    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:39:48.740512    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:39:48.756144    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:39:48.756158    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:39:48.773212    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:39:48.773225    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:39:48.798193    6914 logs.go:123] Gathering logs for coredns [a398a7123448] ...
	I0624 03:39:48.798201    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a398a7123448"
	I0624 03:39:48.816672    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:39:48.816687    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:39:49.173384    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:39:51.330693    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:39:54.175596    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:39:54.175805    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:39:54.196556    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:39:54.196647    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:39:54.212852    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:39:54.212942    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:39:54.225080    6932 logs.go:276] 4 containers: [a467a2817ca1 3f031f564e84 33271b9f8b21 10af503aede9]
	I0624 03:39:54.225142    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:39:54.236164    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:39:54.236230    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:39:54.248269    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:39:54.248341    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:39:54.258665    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:39:54.258726    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:39:54.268923    6932 logs.go:276] 0 containers: []
	W0624 03:39:54.268934    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:39:54.268981    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:39:54.279292    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:39:54.279309    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:39:54.279314    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:39:54.294964    6932 logs.go:123] Gathering logs for coredns [3f031f564e84] ...
	I0624 03:39:54.294975    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f031f564e84"
	I0624 03:39:54.307087    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:39:54.307097    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:39:54.318841    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:39:54.318851    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:39:54.333267    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:39:54.333278    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:39:54.351838    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:39:54.351847    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:39:54.364374    6932 logs.go:123] Gathering logs for coredns [a467a2817ca1] ...
	I0624 03:39:54.364384    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a467a2817ca1"
	I0624 03:39:54.378300    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:39:54.378309    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:39:54.402275    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:39:54.402282    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:39:54.406945    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:39:54.406951    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:39:54.422607    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:39:54.422618    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:39:54.459238    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:39:54.459247    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:39:54.493883    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:39:54.493893    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:39:54.508275    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:39:54.508291    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:39:54.521110    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:39:54.521126    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:39:57.038662    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:39:56.332956    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:39:56.333179    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:39:56.359168    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:39:56.359284    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:39:56.377286    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:39:56.377361    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:39:56.391241    6914 logs.go:276] 4 containers: [a398a7123448 0fa211015fd9 802c943d5cef db7f249020db]
	I0624 03:39:56.391315    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:39:56.402863    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:39:56.402928    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:39:56.413242    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:39:56.413310    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:39:56.424528    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:39:56.424599    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:39:56.439132    6914 logs.go:276] 0 containers: []
	W0624 03:39:56.439144    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:39:56.439199    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:39:56.449185    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:39:56.449202    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:39:56.449208    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:39:56.460861    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:39:56.460871    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:39:56.486428    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:39:56.486436    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:39:56.521631    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:39:56.521642    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:39:56.536624    6914 logs.go:123] Gathering logs for coredns [a398a7123448] ...
	I0624 03:39:56.536634    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a398a7123448"
	I0624 03:39:56.548097    6914 logs.go:123] Gathering logs for coredns [0fa211015fd9] ...
	I0624 03:39:56.548108    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fa211015fd9"
	I0624 03:39:56.559648    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:39:56.559657    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:39:56.573031    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:39:56.573042    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:39:56.590545    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:39:56.590556    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:39:56.601869    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:39:56.601881    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:39:56.635900    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:39:56.635910    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:39:56.640156    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:39:56.640165    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:39:56.654660    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:39:56.654671    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:39:56.673333    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:39:56.673344    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:39:56.690144    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:39:56.690154    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:39:59.207973    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:40:02.040891    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:40:02.041098    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:40:02.060654    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:40:02.060733    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:40:02.074711    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:40:02.074787    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:40:02.086971    6932 logs.go:276] 4 containers: [a467a2817ca1 3f031f564e84 33271b9f8b21 10af503aede9]
	I0624 03:40:02.087071    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:40:02.097929    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:40:02.098003    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:40:02.108176    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:40:02.108244    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:40:02.121673    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:40:02.121737    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:40:02.132059    6932 logs.go:276] 0 containers: []
	W0624 03:40:02.132071    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:40:02.132126    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:40:02.146637    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:40:02.146663    6932 logs.go:123] Gathering logs for coredns [a467a2817ca1] ...
	I0624 03:40:02.146670    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a467a2817ca1"
	I0624 03:40:02.158075    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:40:02.158085    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:40:02.162991    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:40:02.162998    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:40:02.197581    6932 logs.go:123] Gathering logs for coredns [3f031f564e84] ...
	I0624 03:40:02.197592    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f031f564e84"
	I0624 03:40:02.209270    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:40:02.209279    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:40:02.220833    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:40:02.220844    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:40:02.237184    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:40:02.237195    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:40:02.251091    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:40:02.251101    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:40:02.263065    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:40:02.263076    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:40:02.278173    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:40:02.278183    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:40:02.296642    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:40:02.296652    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:40:02.307872    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:40:02.307881    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:40:02.320052    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:40:02.320062    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:40:02.353725    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:40:02.353732    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:40:02.369725    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:40:02.369734    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:40:04.210182    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:40:04.210339    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:40:04.223887    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:40:04.223963    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:40:04.234860    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:40:04.234925    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:40:04.245455    6914 logs.go:276] 4 containers: [a398a7123448 0fa211015fd9 802c943d5cef db7f249020db]
	I0624 03:40:04.245526    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:40:04.255672    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:40:04.255731    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:40:04.266063    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:40:04.266125    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:40:04.276772    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:40:04.276835    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:40:04.287859    6914 logs.go:276] 0 containers: []
	W0624 03:40:04.287871    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:40:04.287928    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:40:04.297962    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:40:04.297980    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:40:04.297985    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:40:04.309545    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:40:04.309554    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:40:04.333894    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:40:04.333902    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:40:04.368174    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:40:04.368184    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:40:04.383247    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:40:04.383257    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:40:04.400398    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:40:04.400408    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:40:04.419242    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:40:04.419254    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:40:04.432538    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:40:04.432551    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:40:04.465654    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:40:04.465662    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:40:04.480224    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:40:04.480233    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:40:04.495498    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:40:04.495512    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:40:04.509766    6914 logs.go:123] Gathering logs for coredns [a398a7123448] ...
	I0624 03:40:04.509779    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a398a7123448"
	I0624 03:40:04.521497    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:40:04.521506    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:40:04.532948    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:40:04.532962    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:40:04.537668    6914 logs.go:123] Gathering logs for coredns [0fa211015fd9] ...
	I0624 03:40:04.537677    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fa211015fd9"
	I0624 03:40:04.896892    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:40:07.050710    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:40:09.898382    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:40:09.898587    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:40:09.920532    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:40:09.920630    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:40:09.935805    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:40:09.935887    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:40:09.948366    6932 logs.go:276] 4 containers: [a467a2817ca1 3f031f564e84 33271b9f8b21 10af503aede9]
	I0624 03:40:09.948446    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:40:09.960370    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:40:09.960436    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:40:09.970739    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:40:09.970798    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:40:09.981192    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:40:09.981253    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:40:09.990870    6932 logs.go:276] 0 containers: []
	W0624 03:40:09.990884    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:40:09.990945    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:40:10.006038    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:40:10.006056    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:40:10.006060    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:40:10.020657    6932 logs.go:123] Gathering logs for coredns [3f031f564e84] ...
	I0624 03:40:10.020668    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f031f564e84"
	I0624 03:40:10.032266    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:40:10.032276    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:40:10.044038    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:40:10.044048    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:40:10.061674    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:40:10.061684    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:40:10.066352    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:40:10.066360    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:40:10.078240    6932 logs.go:123] Gathering logs for coredns [a467a2817ca1] ...
	I0624 03:40:10.078251    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a467a2817ca1"
	I0624 03:40:10.092689    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:40:10.092701    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:40:10.106551    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:40:10.106562    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:40:10.118386    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:40:10.118398    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:40:10.131419    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:40:10.131429    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:40:10.167403    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:40:10.167413    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:40:10.181950    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:40:10.181960    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:40:10.205259    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:40:10.205266    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:40:10.219225    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:40:10.219236    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:40:12.757129    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:40:12.053119    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:40:12.053402    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:40:12.103253    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:40:12.103371    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:40:12.120194    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:40:12.120276    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:40:12.133413    6914 logs.go:276] 4 containers: [a398a7123448 0fa211015fd9 802c943d5cef db7f249020db]
	I0624 03:40:12.133493    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:40:12.144958    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:40:12.145025    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:40:12.155820    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:40:12.155885    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:40:12.171613    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:40:12.171686    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:40:12.182143    6914 logs.go:276] 0 containers: []
	W0624 03:40:12.182155    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:40:12.182212    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:40:12.193549    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:40:12.193565    6914 logs.go:123] Gathering logs for coredns [a398a7123448] ...
	I0624 03:40:12.193570    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a398a7123448"
	I0624 03:40:12.205950    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:40:12.205964    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:40:12.217864    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:40:12.217873    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:40:12.240930    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:40:12.240937    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:40:12.255417    6914 logs.go:123] Gathering logs for coredns [0fa211015fd9] ...
	I0624 03:40:12.255431    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fa211015fd9"
	I0624 03:40:12.267486    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:40:12.267498    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:40:12.280136    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:40:12.280169    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:40:12.315687    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:40:12.315694    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:40:12.331980    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:40:12.331995    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:40:12.343818    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:40:12.343834    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:40:12.355994    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:40:12.356003    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:40:12.391595    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:40:12.391606    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:40:12.406496    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:40:12.406506    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:40:12.418321    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:40:12.418332    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:40:12.436389    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:40:12.436402    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:40:14.943431    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:40:17.759383    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:40:17.759521    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:40:17.780220    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:40:17.780301    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:40:17.794885    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:40:17.794952    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:40:17.810931    6932 logs.go:276] 4 containers: [a467a2817ca1 3f031f564e84 33271b9f8b21 10af503aede9]
	I0624 03:40:17.810998    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:40:17.821741    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:40:17.821800    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:40:17.831820    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:40:17.831879    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:40:17.842089    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:40:17.842159    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:40:17.852030    6932 logs.go:276] 0 containers: []
	W0624 03:40:17.852043    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:40:17.852099    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:40:17.862679    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:40:17.862696    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:40:17.862702    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:40:17.899103    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:40:17.899113    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:40:17.913374    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:40:17.913388    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:40:17.949340    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:40:17.949348    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:40:17.961868    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:40:17.961878    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:40:17.986321    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:40:17.986329    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:40:17.997759    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:40:17.997769    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:40:18.002655    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:40:18.002662    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:40:18.014732    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:40:18.014743    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:40:18.032993    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:40:18.033004    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:40:18.044395    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:40:18.044404    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:40:18.056244    6932 logs.go:123] Gathering logs for coredns [a467a2817ca1] ...
	I0624 03:40:18.056255    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a467a2817ca1"
	I0624 03:40:18.068228    6932 logs.go:123] Gathering logs for coredns [3f031f564e84] ...
	I0624 03:40:18.068239    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f031f564e84"
	I0624 03:40:18.080044    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:40:18.080057    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:40:18.095173    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:40:18.095183    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:40:19.945713    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:40:19.945920    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:40:19.964107    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:40:19.964199    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:40:19.977220    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:40:19.977291    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:40:19.989084    6914 logs.go:276] 4 containers: [a398a7123448 0fa211015fd9 802c943d5cef db7f249020db]
	I0624 03:40:19.989152    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:40:19.999890    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:40:19.999962    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:40:20.010413    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:40:20.010481    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:40:20.020350    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:40:20.020413    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:40:20.030477    6914 logs.go:276] 0 containers: []
	W0624 03:40:20.030490    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:40:20.030548    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:40:20.041048    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:40:20.041068    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:40:20.041073    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:40:20.611172    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:40:20.053169    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:40:20.053179    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:40:20.070858    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:40:20.070868    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:40:20.094241    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:40:20.094252    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:40:20.127146    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:40:20.127153    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:40:20.132226    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:40:20.132237    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:40:20.143706    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:40:20.143715    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:40:20.155871    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:40:20.155882    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:40:20.170658    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:40:20.170669    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:40:20.206356    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:40:20.206366    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:40:20.225757    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:40:20.225766    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:40:20.239809    6914 logs.go:123] Gathering logs for coredns [a398a7123448] ...
	I0624 03:40:20.239818    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a398a7123448"
	I0624 03:40:20.251216    6914 logs.go:123] Gathering logs for coredns [0fa211015fd9] ...
	I0624 03:40:20.251227    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fa211015fd9"
	I0624 03:40:20.262137    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:40:20.262147    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:40:20.273942    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:40:20.273952    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:40:22.788159    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:40:25.613490    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:40:25.613992    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:40:25.653338    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:40:25.653471    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:40:25.675354    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:40:25.675463    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:40:25.691963    6932 logs.go:276] 4 containers: [a467a2817ca1 3f031f564e84 33271b9f8b21 10af503aede9]
	I0624 03:40:25.692045    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:40:25.713169    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:40:25.713243    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:40:25.730998    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:40:25.731080    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:40:25.747460    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:40:25.747542    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:40:25.758357    6932 logs.go:276] 0 containers: []
	W0624 03:40:25.758371    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:40:25.758434    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:40:25.769082    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:40:25.769101    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:40:25.769106    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:40:25.783308    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:40:25.783318    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:40:25.795530    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:40:25.795540    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:40:25.813400    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:40:25.813411    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:40:25.825653    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:40:25.825662    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:40:25.848527    6932 logs.go:123] Gathering logs for coredns [a467a2817ca1] ...
	I0624 03:40:25.848537    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a467a2817ca1"
	I0624 03:40:25.860521    6932 logs.go:123] Gathering logs for coredns [3f031f564e84] ...
	I0624 03:40:25.860534    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f031f564e84"
	I0624 03:40:25.872321    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:40:25.872331    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:40:25.887848    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:40:25.887862    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:40:25.899472    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:40:25.899486    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:40:25.934782    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:40:25.934791    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:40:25.939000    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:40:25.939008    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:40:25.974787    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:40:25.974802    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:40:25.987219    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:40:25.987230    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:40:25.999428    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:40:25.999440    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:40:28.516380    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:40:27.790515    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:40:27.790683    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:40:27.803678    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:40:27.803746    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:40:27.814660    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:40:27.814726    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:40:27.825443    6914 logs.go:276] 4 containers: [a398a7123448 0fa211015fd9 802c943d5cef db7f249020db]
	I0624 03:40:27.825515    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:40:27.839288    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:40:27.839356    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:40:27.849173    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:40:27.849253    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:40:27.859306    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:40:27.859369    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:40:27.869788    6914 logs.go:276] 0 containers: []
	W0624 03:40:27.869802    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:40:27.869859    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:40:27.880541    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:40:27.880574    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:40:27.880580    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:40:27.886233    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:40:27.886241    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:40:27.900163    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:40:27.900176    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:40:27.911912    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:40:27.911926    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:40:27.935076    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:40:27.935084    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:40:27.947371    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:40:27.947384    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:40:27.962937    6914 logs.go:123] Gathering logs for coredns [0fa211015fd9] ...
	I0624 03:40:27.962948    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fa211015fd9"
	I0624 03:40:27.975480    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:40:27.975493    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:40:27.987276    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:40:27.987289    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:40:27.999200    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:40:27.999209    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:40:28.013853    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:40:28.013862    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:40:28.031369    6914 logs.go:123] Gathering logs for coredns [a398a7123448] ...
	I0624 03:40:28.031382    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a398a7123448"
	I0624 03:40:28.042173    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:40:28.042187    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:40:28.077490    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:40:28.077501    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:40:28.111890    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:40:28.111905    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:40:33.518608    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:40:33.518795    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:40:33.536443    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:40:33.536524    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:40:33.549427    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:40:33.549499    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:40:33.560538    6932 logs.go:276] 4 containers: [a467a2817ca1 3f031f564e84 33271b9f8b21 10af503aede9]
	I0624 03:40:33.560617    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:40:33.571054    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:40:33.571128    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:40:33.585627    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:40:33.585697    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:40:33.596720    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:40:33.596784    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:40:33.606881    6932 logs.go:276] 0 containers: []
	W0624 03:40:33.606893    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:40:33.606952    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:40:33.617937    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:40:33.617955    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:40:33.617961    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:40:33.629471    6932 logs.go:123] Gathering logs for coredns [a467a2817ca1] ...
	I0624 03:40:33.629483    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a467a2817ca1"
	I0624 03:40:33.641169    6932 logs.go:123] Gathering logs for coredns [3f031f564e84] ...
	I0624 03:40:33.641179    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f031f564e84"
	I0624 03:40:33.652424    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:40:33.652435    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:40:33.664142    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:40:33.664152    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:40:33.684621    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:40:33.684633    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:40:33.698712    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:40:33.698724    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:40:33.710300    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:40:33.710311    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:40:33.724039    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:40:33.724050    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:40:33.729976    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:40:33.729984    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:40:33.765123    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:40:33.765136    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:40:33.779779    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:40:33.779794    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:40:33.816164    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:40:33.816173    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:40:33.831081    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:40:33.831090    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:40:33.842736    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:40:33.842746    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:40:30.627932    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:40:35.630331    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:40:35.633640    6914 out.go:177] 
	W0624 03:40:35.637629    6914 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0624 03:40:35.637635    6914 out.go:239] * 
	W0624 03:40:35.638128    6914 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:40:35.649635    6914 out.go:177] 
	I0624 03:40:36.367589    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:40:41.369760    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:40:41.369967    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:40:41.389203    6932 logs.go:276] 1 containers: [d0fb4cd6ba25]
	I0624 03:40:41.389305    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:40:41.403076    6932 logs.go:276] 1 containers: [381fa4fa6f17]
	I0624 03:40:41.403154    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:40:41.416339    6932 logs.go:276] 4 containers: [a467a2817ca1 3f031f564e84 33271b9f8b21 10af503aede9]
	I0624 03:40:41.416404    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:40:41.428461    6932 logs.go:276] 1 containers: [67e42add171f]
	I0624 03:40:41.428525    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:40:41.442435    6932 logs.go:276] 1 containers: [863cf9795cb3]
	I0624 03:40:41.442503    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:40:41.452746    6932 logs.go:276] 1 containers: [c6e961745c7e]
	I0624 03:40:41.452811    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:40:41.462953    6932 logs.go:276] 0 containers: []
	W0624 03:40:41.462967    6932 logs.go:278] No container was found matching "kindnet"
	I0624 03:40:41.463019    6932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:40:41.474070    6932 logs.go:276] 1 containers: [141da94e6c85]
	I0624 03:40:41.474087    6932 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:40:41.474093    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:40:41.509550    6932 logs.go:123] Gathering logs for coredns [3f031f564e84] ...
	I0624 03:40:41.509560    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f031f564e84"
	I0624 03:40:41.521428    6932 logs.go:123] Gathering logs for coredns [33271b9f8b21] ...
	I0624 03:40:41.521437    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33271b9f8b21"
	I0624 03:40:41.532905    6932 logs.go:123] Gathering logs for Docker ...
	I0624 03:40:41.532915    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:40:41.555505    6932 logs.go:123] Gathering logs for kube-apiserver [d0fb4cd6ba25] ...
	I0624 03:40:41.555515    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fb4cd6ba25"
	I0624 03:40:41.570797    6932 logs.go:123] Gathering logs for coredns [a467a2817ca1] ...
	I0624 03:40:41.570809    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a467a2817ca1"
	I0624 03:40:41.583013    6932 logs.go:123] Gathering logs for kube-scheduler [67e42add171f] ...
	I0624 03:40:41.583023    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67e42add171f"
	I0624 03:40:41.598927    6932 logs.go:123] Gathering logs for kube-controller-manager [c6e961745c7e] ...
	I0624 03:40:41.598938    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e961745c7e"
	I0624 03:40:41.617101    6932 logs.go:123] Gathering logs for container status ...
	I0624 03:40:41.617111    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:40:41.629350    6932 logs.go:123] Gathering logs for kubelet ...
	I0624 03:40:41.629360    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:40:41.664940    6932 logs.go:123] Gathering logs for dmesg ...
	I0624 03:40:41.664956    6932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:40:41.669560    6932 logs.go:123] Gathering logs for etcd [381fa4fa6f17] ...
	I0624 03:40:41.669565    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 381fa4fa6f17"
	I0624 03:40:41.684032    6932 logs.go:123] Gathering logs for coredns [10af503aede9] ...
	I0624 03:40:41.684046    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10af503aede9"
	I0624 03:40:41.695532    6932 logs.go:123] Gathering logs for kube-proxy [863cf9795cb3] ...
	I0624 03:40:41.695550    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863cf9795cb3"
	I0624 03:40:41.706891    6932 logs.go:123] Gathering logs for storage-provisioner [141da94e6c85] ...
	I0624 03:40:41.706905    6932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141da94e6c85"
	I0624 03:40:44.220480    6932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:40:49.222758    6932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:40:49.227140    6932 out.go:177] 
	W0624 03:40:49.230146    6932 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0624 03:40:49.230154    6932 out.go:239] * 
	W0624 03:40:49.230663    6932 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:40:49.241004    6932 out.go:177] 
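	Both minikube processes end the same way: the probe against https://10.0.2.15:8443/healthz never returns healthy inside the 6m0s node-start budget, so each exits with GUEST_START. In outline the probe loop is a deadline-bounded HTTPS GET; a minimal sketch under that assumption (endpoint and overall budget taken from the log, per-request timeout and retry interval illustrative — cert verification is skipped here purely for the sketch):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // per-probe deadline; the log shows ~5s between "Checking" and "stopped"
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(6 * time.Minute) // matches "wait 6m0s for node"
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(2 * time.Second) // retry interval is an assumption
		}
		fmt.Println("wait for healthy API server: context deadline exceeded")
	}

	Every probe in the log fails with "context deadline exceeded (Client.Timeout exceeded while awaiting headers)", i.e. the request never even receives response headers.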
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-06-24 10:31:46 UTC, ends at Mon 2024-06-24 10:41:05 UTC. --
	Jun 24 10:40:50 running-upgrade-398000 dockerd[3215]: time="2024-06-24T10:40:50.018816933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:40:50 running-upgrade-398000 dockerd[3215]: time="2024-06-24T10:40:50.018846683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:40:50 running-upgrade-398000 dockerd[3215]: time="2024-06-24T10:40:50.018852642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:40:50 running-upgrade-398000 dockerd[3215]: time="2024-06-24T10:40:50.019031016Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/0c03e140e5ae1ec9e16cf820ae78b48f65764bf822499daccd0af6d553f4c18f pid=18872 runtime=io.containerd.runc.v2
	Jun 24 10:40:50 running-upgrade-398000 cri-dockerd[3055]: time="2024-06-24T10:40:50Z" level=error msg="ContainerStats resp: {0x40008d40c0 linux}"
	Jun 24 10:40:51 running-upgrade-398000 cri-dockerd[3055]: time="2024-06-24T10:40:51Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jun 24 10:40:51 running-upgrade-398000 cri-dockerd[3055]: time="2024-06-24T10:40:51Z" level=error msg="ContainerStats resp: {0x4000864280 linux}"
	Jun 24 10:40:51 running-upgrade-398000 cri-dockerd[3055]: time="2024-06-24T10:40:51Z" level=error msg="ContainerStats resp: {0x4000905900 linux}"
	Jun 24 10:40:51 running-upgrade-398000 cri-dockerd[3055]: time="2024-06-24T10:40:51Z" level=error msg="ContainerStats resp: {0x4000864b80 linux}"
	Jun 24 10:40:51 running-upgrade-398000 cri-dockerd[3055]: time="2024-06-24T10:40:51Z" level=error msg="ContainerStats resp: {0x4000864d80 linux}"
	Jun 24 10:40:51 running-upgrade-398000 cri-dockerd[3055]: time="2024-06-24T10:40:51Z" level=error msg="ContainerStats resp: {0x400088b000 linux}"
	Jun 24 10:40:51 running-upgrade-398000 cri-dockerd[3055]: time="2024-06-24T10:40:51Z" level=error msg="ContainerStats resp: {0x4000865480 linux}"
	Jun 24 10:40:51 running-upgrade-398000 cri-dockerd[3055]: time="2024-06-24T10:40:51Z" level=error msg="ContainerStats resp: {0x4000865bc0 linux}"
	Jun 24 10:40:56 running-upgrade-398000 cri-dockerd[3055]: time="2024-06-24T10:40:56Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jun 24 10:41:01 running-upgrade-398000 cri-dockerd[3055]: time="2024-06-24T10:41:01Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jun 24 10:41:02 running-upgrade-398000 cri-dockerd[3055]: time="2024-06-24T10:41:02Z" level=error msg="ContainerStats resp: {0x40008fcbc0 linux}"
	Jun 24 10:41:02 running-upgrade-398000 cri-dockerd[3055]: time="2024-06-24T10:41:02Z" level=error msg="ContainerStats resp: {0x4000928080 linux}"
	Jun 24 10:41:03 running-upgrade-398000 cri-dockerd[3055]: time="2024-06-24T10:41:03Z" level=error msg="ContainerStats resp: {0x40004ab3c0 linux}"
	Jun 24 10:41:04 running-upgrade-398000 cri-dockerd[3055]: time="2024-06-24T10:41:04Z" level=error msg="ContainerStats resp: {0x40008646c0 linux}"
	Jun 24 10:41:04 running-upgrade-398000 cri-dockerd[3055]: time="2024-06-24T10:41:04Z" level=error msg="ContainerStats resp: {0x4000358440 linux}"
	Jun 24 10:41:04 running-upgrade-398000 cri-dockerd[3055]: time="2024-06-24T10:41:04Z" level=error msg="ContainerStats resp: {0x4000864880 linux}"
	Jun 24 10:41:04 running-upgrade-398000 cri-dockerd[3055]: time="2024-06-24T10:41:04Z" level=error msg="ContainerStats resp: {0x400088a400 linux}"
	Jun 24 10:41:04 running-upgrade-398000 cri-dockerd[3055]: time="2024-06-24T10:41:04Z" level=error msg="ContainerStats resp: {0x4000865c40 linux}"
	Jun 24 10:41:04 running-upgrade-398000 cri-dockerd[3055]: time="2024-06-24T10:41:04Z" level=error msg="ContainerStats resp: {0x400088ae40 linux}"
	Jun 24 10:41:04 running-upgrade-398000 cri-dockerd[3055]: time="2024-06-24T10:41:04Z" level=error msg="ContainerStats resp: {0x400088b000 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	0c03e140e5ae1       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   f5a9ee7776101
	64a6408f3dc68       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   fb34b689080f0
	a467a2817ca16       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   f5a9ee7776101
	3f031f564e841       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   fb34b689080f0
	863cf9795cb38       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   bf68276486046
	141da94e6c852       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   04b192e30196d
	381fa4fa6f17f       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   e1da63ab8b015
	c6e961745c7e6       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   e54aec86a6a21
	67e42add171f6       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   eee332292daa0
	d0fb4cd6ba257       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   2bd94794ffc82
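	The status table shows every control-plane container Running (kube-apiserver d0fb4cd6ba257 for about 4 minutes) with only coredns on restart attempt 2, yet the client above never saw a healthy /healthz. (The level=error "ContainerStats resp: {...}" lines in the Docker journal appear to be cri-dockerd logging stats payloads at error severity rather than actual failures.) That combination points at reachability of 10.0.2.15:8443 from where the probe runs rather than at a crashed component; a quick way to separate "process up" from "port reachable" is a bare TCP dial, sketched here (address from the log, timeout illustrative):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "10.0.2.15:8443", 3*time.Second)
		if err != nil {
			fmt.Println("dial failed:", err) // port unreachable: networking, not the apiserver process
			return
		}
		conn.Close()
		fmt.Println("port open; the failure is above TCP (TLS or HTTP never completes)")
	}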
	
	
	==> coredns [0c03e140e5ae] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6533830510722464097.4088767514400438026. HINFO: read udp 10.244.0.2:59620->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6533830510722464097.4088767514400438026. HINFO: read udp 10.244.0.2:53973->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6533830510722464097.4088767514400438026. HINFO: read udp 10.244.0.2:56022->10.0.2.3:53: i/o timeout
	
	
	==> coredns [3f031f564e84] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 107429805341726764.8087091565259898135. HINFO: read udp 10.244.0.3:39499->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 107429805341726764.8087091565259898135. HINFO: read udp 10.244.0.3:52918->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 107429805341726764.8087091565259898135. HINFO: read udp 10.244.0.3:59608->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 107429805341726764.8087091565259898135. HINFO: read udp 10.244.0.3:48214->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 107429805341726764.8087091565259898135. HINFO: read udp 10.244.0.3:35326->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 107429805341726764.8087091565259898135. HINFO: read udp 10.244.0.3:44354->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 107429805341726764.8087091565259898135. HINFO: read udp 10.244.0.3:33022->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 107429805341726764.8087091565259898135. HINFO: read udp 10.244.0.3:47700->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 107429805341726764.8087091565259898135. HINFO: read udp 10.244.0.3:48596->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 107429805341726764.8087091565259898135. HINFO: read udp 10.244.0.3:39106->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [64a6408f3dc6] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5574994734886296325.6777392922581660237. HINFO: read udp 10.244.0.3:39977->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5574994734886296325.6777392922581660237. HINFO: read udp 10.244.0.3:38743->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5574994734886296325.6777392922581660237. HINFO: read udp 10.244.0.3:47181->10.0.2.3:53: i/o timeout
	
	
	==> coredns [a467a2817ca1] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6833447576157427714.7333177408642293442. HINFO: read udp 10.244.0.2:40651->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6833447576157427714.7333177408642293442. HINFO: read udp 10.244.0.2:59917->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6833447576157427714.7333177408642293442. HINFO: read udp 10.244.0.2:37806->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6833447576157427714.7333177408642293442. HINFO: read udp 10.244.0.2:40271->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6833447576157427714.7333177408642293442. HINFO: read udp 10.244.0.2:59252->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6833447576157427714.7333177408642293442. HINFO: read udp 10.244.0.2:43569->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6833447576157427714.7333177408642293442. HINFO: read udp 10.244.0.2:58458->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6833447576157427714.7333177408642293442. HINFO: read udp 10.244.0.2:58233->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6833447576157427714.7333177408642293442. HINFO: read udp 10.244.0.2:46615->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6833447576157427714.7333177408642293442. HINFO: read udp 10.244.0.2:44470->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               running-upgrade-398000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-398000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec
	                    minikube.k8s.io/name=running-upgrade-398000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_24T03_36_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Jun 2024 10:36:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-398000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Jun 2024 10:41:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Jun 2024 10:36:48 +0000   Mon, 24 Jun 2024 10:36:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Jun 2024 10:36:48 +0000   Mon, 24 Jun 2024 10:36:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Jun 2024 10:36:48 +0000   Mon, 24 Jun 2024 10:36:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Jun 2024 10:36:48 +0000   Mon, 24 Jun 2024 10:36:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-398000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 b2c7fe2087824be48e7c1c3b63dfe758
	  System UUID:                b2c7fe2087824be48e7c1c3b63dfe758
	  Boot ID:                    4e7de67e-4ce6-4b78-a475-6edfa8f05ac7
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-cfcwx                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-gsf8t                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-398000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-398000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-running-upgrade-398000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-lxgwf                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-398000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m3s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-398000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-398000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-398000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-398000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s   node-controller  Node running-upgrade-398000 event: Registered Node running-upgrade-398000 in Controller
	
	
	==> dmesg <==
	[  +1.663775] kauditd_printk_skb: 16 callbacks suppressed
	[  +0.162913] systemd-fstab-generator[874]: Ignoring "noauto" for root device
	[  +0.081189] systemd-fstab-generator[885]: Ignoring "noauto" for root device
	[  +0.084885] systemd-fstab-generator[896]: Ignoring "noauto" for root device
	[  +1.237264] systemd-fstab-generator[1046]: Ignoring "noauto" for root device
	[  +0.080619] systemd-fstab-generator[1057]: Ignoring "noauto" for root device
	[Jun24 10:32] systemd-fstab-generator[1289]: Ignoring "noauto" for root device
	[ +10.159222] systemd-fstab-generator[1917]: Ignoring "noauto" for root device
	[  +4.052943] systemd-fstab-generator[2198]: Ignoring "noauto" for root device
	[  +0.147550] systemd-fstab-generator[2233]: Ignoring "noauto" for root device
	[  +0.089762] systemd-fstab-generator[2244]: Ignoring "noauto" for root device
	[  +0.089662] systemd-fstab-generator[2257]: Ignoring "noauto" for root device
	[ +13.265322] kauditd_printk_skb: 86 callbacks suppressed
	[  +0.216932] systemd-fstab-generator[3009]: Ignoring "noauto" for root device
	[  +0.089492] systemd-fstab-generator[3023]: Ignoring "noauto" for root device
	[  +0.084153] systemd-fstab-generator[3034]: Ignoring "noauto" for root device
	[  +0.098511] systemd-fstab-generator[3048]: Ignoring "noauto" for root device
	[  +2.320777] systemd-fstab-generator[3202]: Ignoring "noauto" for root device
	[  +2.700230] systemd-fstab-generator[3559]: Ignoring "noauto" for root device
	[  +1.082391] systemd-fstab-generator[3701]: Ignoring "noauto" for root device
	[ +22.124213] kauditd_printk_skb: 68 callbacks suppressed
	[Jun24 10:36] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.236817] systemd-fstab-generator[11941]: Ignoring "noauto" for root device
	[  +5.632997] systemd-fstab-generator[12547]: Ignoring "noauto" for root device
	[  +0.458210] systemd-fstab-generator[12682]: Ignoring "noauto" for root device
	
	
	==> etcd [381fa4fa6f17] <==
	{"level":"info","ts":"2024-06-24T10:36:43.844Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-06-24T10:36:43.850Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-24T10:36:43.850Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-24T10:36:43.850Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-06-24T10:36:43.850Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-06-24T10:36:43.850Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-06-24T10:36:43.850Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-06-24T10:36:44.591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-06-24T10:36:44.591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-06-24T10:36:44.591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-06-24T10:36:44.591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-06-24T10:36:44.591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-06-24T10:36:44.591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-06-24T10:36:44.591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-06-24T10:36:44.592Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-24T10:36:44.593Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-24T10:36:44.593Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-24T10:36:44.593Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-24T10:36:44.593Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-398000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-24T10:36:44.593Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-24T10:36:44.594Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-24T10:36:44.595Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-06-24T10:36:44.595Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-24T10:36:44.594Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-24T10:36:44.595Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 10:41:05 up 9 min,  0 users,  load average: 0.16, 0.26, 0.16
	Linux running-upgrade-398000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [d0fb4cd6ba25] <==
	I0624 10:36:45.929841       1 controller.go:611] quota admission added evaluator for: namespaces
	I0624 10:36:45.948563       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0624 10:36:45.949816       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0624 10:36:45.952009       1 cache.go:39] Caches are synced for autoregister controller
	I0624 10:36:45.952061       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0624 10:36:45.952193       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0624 10:36:45.956224       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0624 10:36:46.684964       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0624 10:36:46.850631       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0624 10:36:46.851883       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0624 10:36:46.851921       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0624 10:36:46.984717       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0624 10:36:46.995778       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0624 10:36:47.034002       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0624 10:36:47.036101       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0624 10:36:47.036456       1 controller.go:611] quota admission added evaluator for: endpoints
	I0624 10:36:47.037739       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0624 10:36:47.992728       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0624 10:36:48.370622       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0624 10:36:48.374574       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0624 10:36:48.380500       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0624 10:36:48.433597       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0624 10:37:01.146426       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0624 10:37:01.748572       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0624 10:37:02.220950       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [c6e961745c7e] <==
	I0624 10:37:00.856129       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0624 10:37:00.857217       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0624 10:37:00.859346       1 shared_informer.go:262] Caches are synced for endpoint
	I0624 10:37:00.943294       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0624 10:37:00.962674       1 shared_informer.go:262] Caches are synced for deployment
	I0624 10:37:00.964863       1 shared_informer.go:262] Caches are synced for disruption
	I0624 10:37:00.964911       1 disruption.go:371] Sending events to api server.
	I0624 10:37:00.967033       1 shared_informer.go:262] Caches are synced for job
	I0624 10:37:00.968106       1 shared_informer.go:262] Caches are synced for cronjob
	I0624 10:37:00.992809       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0624 10:37:01.003671       1 shared_informer.go:262] Caches are synced for stateful set
	I0624 10:37:01.007824       1 shared_informer.go:262] Caches are synced for attach detach
	I0624 10:37:01.017636       1 shared_informer.go:262] Caches are synced for expand
	I0624 10:37:01.041833       1 shared_informer.go:262] Caches are synced for persistent volume
	I0624 10:37:01.043041       1 shared_informer.go:262] Caches are synced for ephemeral
	I0624 10:37:01.047286       1 shared_informer.go:262] Caches are synced for resource quota
	I0624 10:37:01.047803       1 shared_informer.go:262] Caches are synced for resource quota
	I0624 10:37:01.054564       1 shared_informer.go:262] Caches are synced for PVC protection
	I0624 10:37:01.149317       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-lxgwf"
	I0624 10:37:01.465614       1 shared_informer.go:262] Caches are synced for garbage collector
	I0624 10:37:01.543387       1 shared_informer.go:262] Caches are synced for garbage collector
	I0624 10:37:01.543399       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0624 10:37:01.749710       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0624 10:37:01.848113       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-cfcwx"
	I0624 10:37:01.854480       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-gsf8t"
	
	
	==> kube-proxy [863cf9795cb3] <==
	I0624 10:37:02.209315       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0624 10:37:02.209337       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0624 10:37:02.209346       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0624 10:37:02.219008       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0624 10:37:02.219020       1 server_others.go:206] "Using iptables Proxier"
	I0624 10:37:02.219031       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0624 10:37:02.219430       1 server.go:661] "Version info" version="v1.24.1"
	I0624 10:37:02.219436       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 10:37:02.219670       1 config.go:317] "Starting service config controller"
	I0624 10:37:02.219675       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0624 10:37:02.219684       1 config.go:226] "Starting endpoint slice config controller"
	I0624 10:37:02.219686       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0624 10:37:02.220043       1 config.go:444] "Starting node config controller"
	I0624 10:37:02.220046       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0624 10:37:02.321312       1 shared_informer.go:262] Caches are synced for node config
	I0624 10:37:02.321450       1 shared_informer.go:262] Caches are synced for service config
	I0624 10:37:02.321495       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [67e42add171f] <==
	W0624 10:36:45.919851       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0624 10:36:45.920950       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0624 10:36:45.919863       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0624 10:36:45.919882       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0624 10:36:45.919895       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0624 10:36:45.919907       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0624 10:36:45.919919       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0624 10:36:45.919930       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0624 10:36:45.919995       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0624 10:36:45.921035       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0624 10:36:45.921052       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0624 10:36:45.921056       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0624 10:36:45.921058       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0624 10:36:45.921060       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0624 10:36:45.921062       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0624 10:36:45.921064       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0624 10:36:46.767412       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0624 10:36:46.767429       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0624 10:36:46.791336       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0624 10:36:46.791388       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0624 10:36:46.880631       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0624 10:36:46.880650       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0624 10:36:46.940032       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0624 10:36:46.940130       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0624 10:36:47.112642       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-06-24 10:31:46 UTC, ends at Mon 2024-06-24 10:41:05 UTC. --
	Jun 24 10:37:00 running-upgrade-398000 kubelet[12553]: I0624 10:37:00.819721   12553 topology_manager.go:200] "Topology Admit Handler"
	Jun 24 10:37:00 running-upgrade-398000 kubelet[12553]: I0624 10:37:00.914098   12553 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jun 24 10:37:00 running-upgrade-398000 kubelet[12553]: I0624 10:37:00.914632   12553 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 24 10:37:01 running-upgrade-398000 kubelet[12553]: I0624 10:37:01.015168   12553 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/010ce802-e23e-4683-ae86-1fcc159d3679-tmp\") pod \"storage-provisioner\" (UID: \"010ce802-e23e-4683-ae86-1fcc159d3679\") " pod="kube-system/storage-provisioner"
	Jun 24 10:37:01 running-upgrade-398000 kubelet[12553]: I0624 10:37:01.015194   12553 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jq8g\" (UniqueName: \"kubernetes.io/projected/010ce802-e23e-4683-ae86-1fcc159d3679-kube-api-access-9jq8g\") pod \"storage-provisioner\" (UID: \"010ce802-e23e-4683-ae86-1fcc159d3679\") " pod="kube-system/storage-provisioner"
	Jun 24 10:37:01 running-upgrade-398000 kubelet[12553]: E0624 10:37:01.118319   12553 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jun 24 10:37:01 running-upgrade-398000 kubelet[12553]: E0624 10:37:01.118335   12553 projected.go:192] Error preparing data for projected volume kube-api-access-9jq8g for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jun 24 10:37:01 running-upgrade-398000 kubelet[12553]: E0624 10:37:01.118375   12553 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/010ce802-e23e-4683-ae86-1fcc159d3679-kube-api-access-9jq8g podName:010ce802-e23e-4683-ae86-1fcc159d3679 nodeName:}" failed. No retries permitted until 2024-06-24 10:37:01.618357986 +0000 UTC m=+13.257267090 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9jq8g" (UniqueName: "kubernetes.io/projected/010ce802-e23e-4683-ae86-1fcc159d3679-kube-api-access-9jq8g") pod "storage-provisioner" (UID: "010ce802-e23e-4683-ae86-1fcc159d3679") : configmap "kube-root-ca.crt" not found
	Jun 24 10:37:01 running-upgrade-398000 kubelet[12553]: I0624 10:37:01.150973   12553 topology_manager.go:200] "Topology Admit Handler"
	Jun 24 10:37:01 running-upgrade-398000 kubelet[12553]: I0624 10:37:01.317460   12553 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea4ccd30-bee0-4598-9a5b-89a847f0aedf-lib-modules\") pod \"kube-proxy-lxgwf\" (UID: \"ea4ccd30-bee0-4598-9a5b-89a847f0aedf\") " pod="kube-system/kube-proxy-lxgwf"
	Jun 24 10:37:01 running-upgrade-398000 kubelet[12553]: I0624 10:37:01.317542   12553 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea4ccd30-bee0-4598-9a5b-89a847f0aedf-xtables-lock\") pod \"kube-proxy-lxgwf\" (UID: \"ea4ccd30-bee0-4598-9a5b-89a847f0aedf\") " pod="kube-system/kube-proxy-lxgwf"
	Jun 24 10:37:01 running-upgrade-398000 kubelet[12553]: I0624 10:37:01.317581   12553 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gstb4\" (UniqueName: \"kubernetes.io/projected/ea4ccd30-bee0-4598-9a5b-89a847f0aedf-kube-api-access-gstb4\") pod \"kube-proxy-lxgwf\" (UID: \"ea4ccd30-bee0-4598-9a5b-89a847f0aedf\") " pod="kube-system/kube-proxy-lxgwf"
	Jun 24 10:37:01 running-upgrade-398000 kubelet[12553]: I0624 10:37:01.317602   12553 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ea4ccd30-bee0-4598-9a5b-89a847f0aedf-kube-proxy\") pod \"kube-proxy-lxgwf\" (UID: \"ea4ccd30-bee0-4598-9a5b-89a847f0aedf\") " pod="kube-system/kube-proxy-lxgwf"
	Jun 24 10:37:01 running-upgrade-398000 kubelet[12553]: E0624 10:37:01.421398   12553 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jun 24 10:37:01 running-upgrade-398000 kubelet[12553]: E0624 10:37:01.421414   12553 projected.go:192] Error preparing data for projected volume kube-api-access-gstb4 for pod kube-system/kube-proxy-lxgwf: configmap "kube-root-ca.crt" not found
	Jun 24 10:37:01 running-upgrade-398000 kubelet[12553]: E0624 10:37:01.421448   12553 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/ea4ccd30-bee0-4598-9a5b-89a847f0aedf-kube-api-access-gstb4 podName:ea4ccd30-bee0-4598-9a5b-89a847f0aedf nodeName:}" failed. No retries permitted until 2024-06-24 10:37:01.921427966 +0000 UTC m=+13.560337070 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gstb4" (UniqueName: "kubernetes.io/projected/ea4ccd30-bee0-4598-9a5b-89a847f0aedf-kube-api-access-gstb4") pod "kube-proxy-lxgwf" (UID: "ea4ccd30-bee0-4598-9a5b-89a847f0aedf") : configmap "kube-root-ca.crt" not found
	Jun 24 10:37:01 running-upgrade-398000 kubelet[12553]: I0624 10:37:01.852113   12553 topology_manager.go:200] "Topology Admit Handler"
	Jun 24 10:37:01 running-upgrade-398000 kubelet[12553]: I0624 10:37:01.861478   12553 topology_manager.go:200] "Topology Admit Handler"
	Jun 24 10:37:02 running-upgrade-398000 kubelet[12553]: I0624 10:37:02.022517   12553 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzpnk\" (UniqueName: \"kubernetes.io/projected/b88d5d8f-272f-4cc7-ac43-8b55f917108d-kube-api-access-qzpnk\") pod \"coredns-6d4b75cb6d-gsf8t\" (UID: \"b88d5d8f-272f-4cc7-ac43-8b55f917108d\") " pod="kube-system/coredns-6d4b75cb6d-gsf8t"
	Jun 24 10:37:02 running-upgrade-398000 kubelet[12553]: I0624 10:37:02.022559   12553 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct5f8\" (UniqueName: \"kubernetes.io/projected/079b093d-5e6e-49e9-870c-5bc360fed729-kube-api-access-ct5f8\") pod \"coredns-6d4b75cb6d-cfcwx\" (UID: \"079b093d-5e6e-49e9-870c-5bc360fed729\") " pod="kube-system/coredns-6d4b75cb6d-cfcwx"
	Jun 24 10:37:02 running-upgrade-398000 kubelet[12553]: I0624 10:37:02.022574   12553 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/079b093d-5e6e-49e9-870c-5bc360fed729-config-volume\") pod \"coredns-6d4b75cb6d-cfcwx\" (UID: \"079b093d-5e6e-49e9-870c-5bc360fed729\") " pod="kube-system/coredns-6d4b75cb6d-cfcwx"
	Jun 24 10:37:02 running-upgrade-398000 kubelet[12553]: I0624 10:37:02.022585   12553 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b88d5d8f-272f-4cc7-ac43-8b55f917108d-config-volume\") pod \"coredns-6d4b75cb6d-gsf8t\" (UID: \"b88d5d8f-272f-4cc7-ac43-8b55f917108d\") " pod="kube-system/coredns-6d4b75cb6d-gsf8t"
	Jun 24 10:37:02 running-upgrade-398000 kubelet[12553]: I0624 10:37:02.615988   12553 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="f5a9ee77761014f4e3eba69486bc98745583f065682823a4b966499af1391999"
	Jun 24 10:40:49 running-upgrade-398000 kubelet[12553]: I0624 10:40:49.836025   12553 scope.go:110] "RemoveContainer" containerID="33271b9f8b21216b30cec41d71edbdc1e96efef0a7eb2a2ba0bb2642376e4b64"
	Jun 24 10:40:50 running-upgrade-398000 kubelet[12553]: I0624 10:40:50.856706   12553 scope.go:110] "RemoveContainer" containerID="10af503aede9e15f840231c1dc84ef9363ff2674fc9c1c301784e1c7775d34df"
	
	
	==> storage-provisioner [141da94e6c85] <==
	I0624 10:37:01.937978       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0624 10:37:01.941472       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0624 10:37:01.941489       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0624 10:37:01.944483       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0624 10:37:01.944625       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-398000_fc3aae3b-9502-4f3c-be14-aa4fae7c0e05!
	I0624 10:37:01.945063       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b2af5b46-5479-4892-b650-d8ba91839278", APIVersion:"v1", ResourceVersion:"358", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-398000_fc3aae3b-9502-4f3c-be14-aa4fae7c0e05 became leader
	I0624 10:37:02.045438       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-398000_fc3aae3b-9502-4f3c-be14-aa4fae7c0e05!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-398000 -n running-upgrade-398000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-398000 -n running-upgrade-398000: exit status 2 (15.679177292s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-398000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-398000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-398000
--- FAIL: TestRunningBinaryUpgrade (615.45s)
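Diagnosis: the coredns logs above show repeated "read udp ...->10.0.2.3:53: i/o timeout" errors, i.e. pods could not reach the QEMU user-network resolver at 10.0.2.3, and the final status probe reported the apiserver as Stopped. A minimal sketch for re-running the probe by hand (the status command and profile name are taken verbatim from this run; the logs command is an assumed way to regenerate the component dump above):

	out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-398000
	out/minikube-darwin-arm64 logs -p running-upgrade-398000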

TestKubernetesUpgrade (19.26s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-786000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-786000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (11.956662667s)

-- stdout --
	* [kubernetes-upgrade-786000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-786000" primary control-plane node in "kubernetes-upgrade-786000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-786000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
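Diagnosis: both VM creation attempts above failed with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', meaning the socket_vmnet daemon backing the selected network was not running on the host. A minimal host-side check, assuming socket_vmnet was installed via Homebrew (the socket path matches the SocketVMnetPath in the cluster config below):

	ls -l /var/run/socket_vmnet            # the socket only exists while the daemon is running
	sudo brew services start socket_vmnet  # assumed Homebrew service name; restarts the daemon

The stderr trace below shows the same refusal surfacing from libmachine while starting the QEMU VM.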
** stderr ** 
	I0624 03:30:40.228933    6737 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:30:40.229079    6737 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:30:40.229082    6737 out.go:304] Setting ErrFile to fd 2...
	I0624 03:30:40.229084    6737 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:30:40.229227    6737 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:30:40.230637    6737 out.go:298] Setting JSON to false
	I0624 03:30:40.247959    6737 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5410,"bootTime":1719219630,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:30:40.248027    6737 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:30:40.254425    6737 out.go:177] * [kubernetes-upgrade-786000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:30:40.262498    6737 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:30:40.262607    6737 notify.go:220] Checking for updates...
	I0624 03:30:40.276295    6737 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:30:40.279507    6737 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:30:40.282472    6737 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:30:40.286383    6737 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:30:40.294518    6737 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:30:40.298693    6737 config.go:182] Loaded profile config "NoKubernetes-996000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:30:40.298763    6737 config.go:182] Loaded profile config "multinode-913000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:30:40.298807    6737 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:30:40.302432    6737 out.go:177] * Using the qemu2 driver based on user configuration
	I0624 03:30:40.308505    6737 start.go:297] selected driver: qemu2
	I0624 03:30:40.308513    6737 start.go:901] validating driver "qemu2" against <nil>
	I0624 03:30:40.308520    6737 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:30:40.310646    6737 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0624 03:30:40.314499    6737 out.go:177] * Automatically selected the socket_vmnet network
	I0624 03:30:40.318540    6737 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0624 03:30:40.318562    6737 cni.go:84] Creating CNI manager for ""
	I0624 03:30:40.318569    6737 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0624 03:30:40.318601    6737 start.go:340] cluster config:
	{Name:kubernetes-upgrade-786000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-786000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:30:40.322593    6737 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:30:40.326426    6737 out.go:177] * Starting "kubernetes-upgrade-786000" primary control-plane node in "kubernetes-upgrade-786000" cluster
	I0624 03:30:40.334495    6737 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0624 03:30:40.334509    6737 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0624 03:30:40.334518    6737 cache.go:56] Caching tarball of preloaded images
	I0624 03:30:40.334574    6737 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0624 03:30:40.334579    6737 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0624 03:30:40.334646    6737 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/kubernetes-upgrade-786000/config.json ...
	I0624 03:30:40.334657    6737 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/kubernetes-upgrade-786000/config.json: {Name:mk56f562a419b75915371a7e01c5f6872ef43660 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:30:40.334946    6737 start.go:360] acquireMachinesLock for kubernetes-upgrade-786000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:30:42.313542    6737 start.go:364] duration metric: took 1.978575542s to acquireMachinesLock for "kubernetes-upgrade-786000"
	I0624 03:30:42.313766    6737 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-786000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-786000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:30:42.314044    6737 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:30:42.323626    6737 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0624 03:30:42.373865    6737 start.go:159] libmachine.API.Create for "kubernetes-upgrade-786000" (driver="qemu2")
	I0624 03:30:42.373920    6737 client.go:168] LocalClient.Create starting
	I0624 03:30:42.374044    6737 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:30:42.374104    6737 main.go:141] libmachine: Decoding PEM data...
	I0624 03:30:42.374122    6737 main.go:141] libmachine: Parsing certificate...
	I0624 03:30:42.374191    6737 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:30:42.374235    6737 main.go:141] libmachine: Decoding PEM data...
	I0624 03:30:42.374259    6737 main.go:141] libmachine: Parsing certificate...
	I0624 03:30:42.374903    6737 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:30:42.620697    6737 main.go:141] libmachine: Creating SSH key...
	I0624 03:30:42.668991    6737 main.go:141] libmachine: Creating Disk image...
	I0624 03:30:42.668996    6737 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:30:42.669163    6737 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubernetes-upgrade-786000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubernetes-upgrade-786000/disk.qcow2
	I0624 03:30:42.681818    6737 main.go:141] libmachine: STDOUT: 
	I0624 03:30:42.681840    6737 main.go:141] libmachine: STDERR: 
	I0624 03:30:42.681892    6737 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubernetes-upgrade-786000/disk.qcow2 +20000M
	I0624 03:30:42.692700    6737 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:30:42.692714    6737 main.go:141] libmachine: STDERR: 
	I0624 03:30:42.692741    6737 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubernetes-upgrade-786000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubernetes-upgrade-786000/disk.qcow2
	I0624 03:30:42.692749    6737 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:30:42.692780    6737 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubernetes-upgrade-786000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubernetes-upgrade-786000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubernetes-upgrade-786000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:b9:99:69:e1:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubernetes-upgrade-786000/disk.qcow2
	I0624 03:30:42.694436    6737 main.go:141] libmachine: STDOUT: 
	I0624 03:30:42.694450    6737 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:30:42.694472    6737 client.go:171] duration metric: took 320.547667ms to LocalClient.Create
	I0624 03:30:44.696686    6737 start.go:128] duration metric: took 2.382463583s to createHost
	I0624 03:30:44.696759    6737 start.go:83] releasing machines lock for "kubernetes-upgrade-786000", held for 2.383158792s
	W0624 03:30:44.696825    6737 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:30:44.712542    6737 out.go:177] * Deleting "kubernetes-upgrade-786000" in qemu2 ...
	W0624 03:30:44.745082    6737 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:30:44.745120    6737 start.go:728] Will try again in 5 seconds ...
	I0624 03:30:49.747210    6737 start.go:360] acquireMachinesLock for kubernetes-upgrade-786000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:30:49.757545    6737 start.go:364] duration metric: took 10.267167ms to acquireMachinesLock for "kubernetes-upgrade-786000"
	I0624 03:30:49.757614    6737 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-786000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-786000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:30:49.757758    6737 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:30:49.766226    6737 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0624 03:30:49.812245    6737 start.go:159] libmachine.API.Create for "kubernetes-upgrade-786000" (driver="qemu2")
	I0624 03:30:49.812302    6737 client.go:168] LocalClient.Create starting
	I0624 03:30:49.812389    6737 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:30:49.812445    6737 main.go:141] libmachine: Decoding PEM data...
	I0624 03:30:49.812458    6737 main.go:141] libmachine: Parsing certificate...
	I0624 03:30:49.812519    6737 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:30:49.812556    6737 main.go:141] libmachine: Decoding PEM data...
	I0624 03:30:49.812577    6737 main.go:141] libmachine: Parsing certificate...
	I0624 03:30:49.813038    6737 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:30:50.052559    6737 main.go:141] libmachine: Creating SSH key...
	I0624 03:30:50.083135    6737 main.go:141] libmachine: Creating Disk image...
	I0624 03:30:50.083141    6737 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:30:50.083332    6737 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubernetes-upgrade-786000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubernetes-upgrade-786000/disk.qcow2
	I0624 03:30:50.095762    6737 main.go:141] libmachine: STDOUT: 
	I0624 03:30:50.095783    6737 main.go:141] libmachine: STDERR: 
	I0624 03:30:50.095841    6737 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubernetes-upgrade-786000/disk.qcow2 +20000M
	I0624 03:30:50.106745    6737 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:30:50.106759    6737 main.go:141] libmachine: STDERR: 
	I0624 03:30:50.106776    6737 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubernetes-upgrade-786000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubernetes-upgrade-786000/disk.qcow2
	I0624 03:30:50.106791    6737 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:30:50.106823    6737 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubernetes-upgrade-786000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubernetes-upgrade-786000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubernetes-upgrade-786000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:14:2d:8d:45:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubernetes-upgrade-786000/disk.qcow2
	I0624 03:30:50.108440    6737 main.go:141] libmachine: STDOUT: 
	I0624 03:30:50.108453    6737 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:30:50.108466    6737 client.go:171] duration metric: took 296.161791ms to LocalClient.Create
	I0624 03:30:52.110625    6737 start.go:128] duration metric: took 2.35285475s to createHost
	I0624 03:30:52.110673    6737 start.go:83] releasing machines lock for "kubernetes-upgrade-786000", held for 2.353115584s
	W0624 03:30:52.110931    6737 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-786000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:30:52.115690    6737 out.go:177] 
	W0624 03:30:52.130563    6737 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:30:52.130591    6737 out.go:239] * 
	W0624 03:30:52.133209    6737 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:30:52.141560    6737 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-786000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-786000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-786000: (1.853887041s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-786000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-786000 status --format={{.Host}}: exit status 7 (66.668542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-786000 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-786000 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.182469792s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-786000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-786000" primary control-plane node in "kubernetes-upgrade-786000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-786000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-786000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0624 03:30:54.108475    6794 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:30:54.108603    6794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:30:54.108606    6794 out.go:304] Setting ErrFile to fd 2...
	I0624 03:30:54.108609    6794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:30:54.108728    6794 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:30:54.109676    6794 out.go:298] Setting JSON to false
	I0624 03:30:54.125549    6794 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5424,"bootTime":1719219630,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:30:54.125613    6794 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:30:54.130640    6794 out.go:177] * [kubernetes-upgrade-786000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:30:54.138483    6794 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:30:54.138518    6794 notify.go:220] Checking for updates...
	I0624 03:30:54.145610    6794 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:30:54.148578    6794 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:30:54.151631    6794 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:30:54.154645    6794 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:30:54.156027    6794 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:30:54.158816    6794 config.go:182] Loaded profile config "kubernetes-upgrade-786000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0624 03:30:54.159091    6794 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:30:54.163598    6794 out.go:177] * Using the qemu2 driver based on existing profile
	I0624 03:30:54.168598    6794 start.go:297] selected driver: qemu2
	I0624 03:30:54.168604    6794 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-786000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-786000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:30:54.168672    6794 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:30:54.170851    6794 cni.go:84] Creating CNI manager for ""
	I0624 03:30:54.170868    6794 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:30:54.170896    6794 start.go:340] cluster config:
	{Name:kubernetes-upgrade-786000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kubernetes-upgrade-786000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:30:54.175067    6794 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:30:54.183587    6794 out.go:177] * Starting "kubernetes-upgrade-786000" primary control-plane node in "kubernetes-upgrade-786000" cluster
	I0624 03:30:54.187614    6794 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:30:54.187630    6794 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0624 03:30:54.187638    6794 cache.go:56] Caching tarball of preloaded images
	I0624 03:30:54.187701    6794 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0624 03:30:54.187706    6794 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:30:54.187771    6794 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/kubernetes-upgrade-786000/config.json ...
	I0624 03:30:54.188239    6794 start.go:360] acquireMachinesLock for kubernetes-upgrade-786000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:30:54.188269    6794 start.go:364] duration metric: took 24.292µs to acquireMachinesLock for "kubernetes-upgrade-786000"
	I0624 03:30:54.188277    6794 start.go:96] Skipping create...Using existing machine configuration
	I0624 03:30:54.188285    6794 fix.go:54] fixHost starting: 
	I0624 03:30:54.188399    6794 fix.go:112] recreateIfNeeded on kubernetes-upgrade-786000: state=Stopped err=<nil>
	W0624 03:30:54.188408    6794 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 03:30:54.196620    6794 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-786000" ...
	I0624 03:30:54.200650    6794 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubernetes-upgrade-786000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubernetes-upgrade-786000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubernetes-upgrade-786000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:14:2d:8d:45:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubernetes-upgrade-786000/disk.qcow2
	I0624 03:30:54.202700    6794 main.go:141] libmachine: STDOUT: 
	I0624 03:30:54.202721    6794 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:30:54.202755    6794 fix.go:56] duration metric: took 14.472125ms for fixHost
	I0624 03:30:54.202759    6794 start.go:83] releasing machines lock for "kubernetes-upgrade-786000", held for 14.486ms
	W0624 03:30:54.202766    6794 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:30:54.202796    6794 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:30:54.202801    6794 start.go:728] Will try again in 5 seconds ...
	I0624 03:30:59.204956    6794 start.go:360] acquireMachinesLock for kubernetes-upgrade-786000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:30:59.205248    6794 start.go:364] duration metric: took 195.041µs to acquireMachinesLock for "kubernetes-upgrade-786000"
	I0624 03:30:59.205296    6794 start.go:96] Skipping create...Using existing machine configuration
	I0624 03:30:59.205308    6794 fix.go:54] fixHost starting: 
	I0624 03:30:59.205761    6794 fix.go:112] recreateIfNeeded on kubernetes-upgrade-786000: state=Stopped err=<nil>
	W0624 03:30:59.205778    6794 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 03:30:59.214288    6794 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-786000" ...
	I0624 03:30:59.218496    6794 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubernetes-upgrade-786000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubernetes-upgrade-786000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubernetes-upgrade-786000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:14:2d:8d:45:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubernetes-upgrade-786000/disk.qcow2
	I0624 03:30:59.223369    6794 main.go:141] libmachine: STDOUT: 
	I0624 03:30:59.223415    6794 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:30:59.223490    6794 fix.go:56] duration metric: took 18.182875ms for fixHost
	I0624 03:30:59.223503    6794 start.go:83] releasing machines lock for "kubernetes-upgrade-786000", held for 18.239584ms
	W0624 03:30:59.223637    6794 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-786000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:30:59.231348    6794 out.go:177] 
	W0624 03:30:59.234455    6794 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:30:59.234492    6794 out.go:239] * 
	W0624 03:30:59.236910    6794 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:30:59.246448    6794 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-786000 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-786000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-786000 version --output=json: exit status 1 (63.680958ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-786000" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-06-24 03:30:59.326893 -0700 PDT m=+733.796577876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-786000 -n kubernetes-upgrade-786000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-786000 -n kubernetes-upgrade-786000: exit status 7 (32.689875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-786000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-786000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-786000
--- FAIL: TestKubernetesUpgrade (19.26s)
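Every start attempt in this failure (and in the NoKubernetes failures below) dies at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU never receives the network file descriptor it expects on fd=3. A minimal triage sketch for the CI host, assuming socket_vmnet was installed through Homebrew as in the minikube qemu2 driver documentation (the service name and the Homebrew invocation are assumptions, not taken from this log):

	# assumption: socket_vmnet installed via Homebrew, as in the minikube qemu2 docs
	# check that the control socket actually exists
	ls -l /var/run/socket_vmnet
	# check whether the daemon is loaded; socket_vmnet must run as root to use vmnet.framework
	sudo launchctl list | grep -i socket_vmnet
	# restart it through Homebrew services if it is not running
	HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services restart socket_vmnet

With the daemon healthy, the same minikube start invocations above should at least get past VM creation.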

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (12.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-996000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-996000 --driver=qemu2 : exit status 80 (12.121262292s)

                                                
                                                
-- stdout --
	* [NoKubernetes-996000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-996000" primary control-plane node in "NoKubernetes-996000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-996000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-996000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-996000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-996000 -n NoKubernetes-996000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-996000 -n NoKubernetes-996000: exit status 7 (51.719041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-996000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (12.17s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (7.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-996000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-996000 --no-kubernetes --driver=qemu2 : exit status 80 (7.388816875s)

                                                
                                                
-- stdout --
	* [NoKubernetes-996000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-996000
	* Restarting existing qemu2 VM for "NoKubernetes-996000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-996000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-996000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-996000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-996000 -n NoKubernetes-996000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-996000 -n NoKubernetes-996000: exit status 7 (48.142375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-996000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (7.44s)

                                                
                                    
TestNoKubernetes/serial/Start (7.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-996000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-996000 --no-kubernetes --driver=qemu2 : exit status 80 (7.353611791s)

                                                
                                                
-- stdout --
	* [NoKubernetes-996000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-996000
	* Restarting existing qemu2 VM for "NoKubernetes-996000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-996000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-996000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-996000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-996000 -n NoKubernetes-996000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-996000 -n NoKubernetes-996000: exit status 7 (32.754125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-996000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (7.39s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (575.27s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3592734074 start -p stopped-upgrade-252000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3592734074 start -p stopped-upgrade-252000 --memory=2200 --vm-driver=qemu2 : (51.472188208s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3592734074 -p stopped-upgrade-252000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3592734074 -p stopped-upgrade-252000 stop: (3.080045959s)
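Note the contrast with the failures above: the profile created by the old v1.26.0 binary stores no socket_vmnet settings (its saved config below shows Network:, SocketVMnetClientPath: and SocketVMnetPath: all empty), so the restart launches qemu-system-aarch64 directly with QEMU user-mode networking, as seen later in this log:

	-nic user,model=virtio,hostfwd=tcp::51107-:22,hostfwd=tcp::51108-:2376,hostname=stopped-upgrade-252000

That path never touches /var/run/socket_vmnet, which is presumably why this is the only test in this group whose VM boots at all.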
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-252000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-252000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m40.645638166s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-252000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-252000" primary control-plane node in "stopped-upgrade-252000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-252000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0624 03:31:55.049951    6914 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:31:55.050078    6914 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:31:55.050082    6914 out.go:304] Setting ErrFile to fd 2...
	I0624 03:31:55.050084    6914 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:31:55.050214    6914 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:31:55.051494    6914 out.go:298] Setting JSON to false
	I0624 03:31:55.071133    6914 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5485,"bootTime":1719219630,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:31:55.071211    6914 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:31:55.075024    6914 out.go:177] * [stopped-upgrade-252000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:31:55.082971    6914 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:31:55.082972    6914 notify.go:220] Checking for updates...
	I0624 03:31:55.089906    6914 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:31:55.092969    6914 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:31:55.095926    6914 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:31:55.098911    6914 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:31:55.101938    6914 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:31:55.105166    6914 config.go:182] Loaded profile config "stopped-upgrade-252000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0624 03:31:55.107866    6914 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0624 03:31:55.110907    6914 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:31:55.113869    6914 out.go:177] * Using the qemu2 driver based on existing profile
	I0624 03:31:55.120894    6914 start.go:297] selected driver: qemu2
	I0624 03:31:55.120903    6914 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-252000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51139 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-252000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0624 03:31:55.120960    6914 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:31:55.123800    6914 cni.go:84] Creating CNI manager for ""
	I0624 03:31:55.123824    6914 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:31:55.123846    6914 start.go:340] cluster config:
	{Name:stopped-upgrade-252000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51139 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-252000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0624 03:31:55.123901    6914 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:31:55.131908    6914 out.go:177] * Starting "stopped-upgrade-252000" primary control-plane node in "stopped-upgrade-252000" cluster
	I0624 03:31:55.135888    6914 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0624 03:31:55.135926    6914 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0624 03:31:55.135935    6914 cache.go:56] Caching tarball of preloaded images
	I0624 03:31:55.136037    6914 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0624 03:31:55.136046    6914 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0624 03:31:55.136105    6914 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/config.json ...
	I0624 03:31:55.136479    6914 start.go:360] acquireMachinesLock for stopped-upgrade-252000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:31:55.136520    6914 start.go:364] duration metric: took 31.334µs to acquireMachinesLock for "stopped-upgrade-252000"
	I0624 03:31:55.136529    6914 start.go:96] Skipping create...Using existing machine configuration
	I0624 03:31:55.136536    6914 fix.go:54] fixHost starting: 
	I0624 03:31:55.136654    6914 fix.go:112] recreateIfNeeded on stopped-upgrade-252000: state=Stopped err=<nil>
	W0624 03:31:55.136664    6914 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 03:31:55.139947    6914 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-252000" ...
	I0624 03:31:55.148013    6914 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/stopped-upgrade-252000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/stopped-upgrade-252000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/stopped-upgrade-252000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51107-:22,hostfwd=tcp::51108-:2376,hostname=stopped-upgrade-252000 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/stopped-upgrade-252000/disk.qcow2
	I0624 03:31:55.200660    6914 main.go:141] libmachine: STDOUT: 
	I0624 03:31:55.200694    6914 main.go:141] libmachine: STDERR: 
	I0624 03:31:55.200700    6914 main.go:141] libmachine: Waiting for VM to start (ssh -p 51107 docker@127.0.0.1)...
	I0624 03:32:14.719614    6914 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/config.json ...
	I0624 03:32:14.719943    6914 machine.go:94] provisionDockerMachine start ...
	I0624 03:32:14.720015    6914 main.go:141] libmachine: Using SSH client type: native
	I0624 03:32:14.720277    6914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d82900] 0x100d85160 <nil>  [] 0s} localhost 51107 <nil> <nil>}
	I0624 03:32:14.720286    6914 main.go:141] libmachine: About to run SSH command:
	hostname
	I0624 03:32:14.783313    6914 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0624 03:32:14.783327    6914 buildroot.go:166] provisioning hostname "stopped-upgrade-252000"
	I0624 03:32:14.783381    6914 main.go:141] libmachine: Using SSH client type: native
	I0624 03:32:14.783531    6914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d82900] 0x100d85160 <nil>  [] 0s} localhost 51107 <nil> <nil>}
	I0624 03:32:14.783538    6914 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-252000 && echo "stopped-upgrade-252000" | sudo tee /etc/hostname
	I0624 03:32:14.844972    6914 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-252000
	
	I0624 03:32:14.845036    6914 main.go:141] libmachine: Using SSH client type: native
	I0624 03:32:14.845166    6914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d82900] 0x100d85160 <nil>  [] 0s} localhost 51107 <nil> <nil>}
	I0624 03:32:14.845178    6914 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-252000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-252000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-252000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0624 03:32:14.906871    6914 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0624 03:32:14.906890    6914 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19124-4612/.minikube CaCertPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19124-4612/.minikube}
	I0624 03:32:14.906900    6914 buildroot.go:174] setting up certificates
	I0624 03:32:14.906905    6914 provision.go:84] configureAuth start
	I0624 03:32:14.906909    6914 provision.go:143] copyHostCerts
	I0624 03:32:14.907000    6914 exec_runner.go:144] found /Users/jenkins/minikube-integration/19124-4612/.minikube/ca.pem, removing ...
	I0624 03:32:14.907006    6914 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19124-4612/.minikube/ca.pem
	I0624 03:32:14.907118    6914 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19124-4612/.minikube/ca.pem (1082 bytes)
	I0624 03:32:14.907295    6914 exec_runner.go:144] found /Users/jenkins/minikube-integration/19124-4612/.minikube/cert.pem, removing ...
	I0624 03:32:14.907298    6914 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19124-4612/.minikube/cert.pem
	I0624 03:32:14.907337    6914 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19124-4612/.minikube/cert.pem (1123 bytes)
	I0624 03:32:14.907439    6914 exec_runner.go:144] found /Users/jenkins/minikube-integration/19124-4612/.minikube/key.pem, removing ...
	I0624 03:32:14.907442    6914 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19124-4612/.minikube/key.pem
	I0624 03:32:14.907487    6914 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19124-4612/.minikube/key.pem (1679 bytes)
	I0624 03:32:14.907577    6914 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-252000 san=[127.0.0.1 localhost minikube stopped-upgrade-252000]
	I0624 03:32:14.952653    6914 provision.go:177] copyRemoteCerts
	I0624 03:32:14.952681    6914 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0624 03:32:14.952687    6914 sshutil.go:53] new ssh client: &{IP:localhost Port:51107 SSHKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/stopped-upgrade-252000/id_rsa Username:docker}
	I0624 03:32:14.986509    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0624 03:32:14.993388    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0624 03:32:14.999836    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0624 03:32:15.006656    6914 provision.go:87] duration metric: took 99.745042ms to configureAuth
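
configureAuth above copies the host CA material into place and mints a server certificate whose SANs cover 127.0.0.1, localhost, minikube, and the machine name, which is what lets the TLS-verified Docker endpoint on port 2376 be dialed under any of those names. A compact sketch of issuing such a certificate with Go's crypto/x509; the key size and 24h lifetime are illustrative assumptions, not minikube's values:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Stand-in CA, as if loaded from ca.pem / ca-key.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			BasicConstraintsValid: true,
			KeyUsage:              x509.KeyUsageCertSign,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert with the SANs listed in the provision.go:117 line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-252000"}},
			DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-252000"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		fmt.Println(len(der), err)
	}
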
	I0624 03:32:15.006665    6914 buildroot.go:189] setting minikube options for container-runtime
	I0624 03:32:15.006777    6914 config.go:182] Loaded profile config "stopped-upgrade-252000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0624 03:32:15.006811    6914 main.go:141] libmachine: Using SSH client type: native
	I0624 03:32:15.006893    6914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d82900] 0x100d85160 <nil>  [] 0s} localhost 51107 <nil> <nil>}
	I0624 03:32:15.006897    6914 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0624 03:32:15.065024    6914 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0624 03:32:15.065033    6914 buildroot.go:70] root file system type: tmpfs
	I0624 03:32:15.065100    6914 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0624 03:32:15.065144    6914 main.go:141] libmachine: Using SSH client type: native
	I0624 03:32:15.065288    6914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d82900] 0x100d85160 <nil>  [] 0s} localhost 51107 <nil> <nil>}
	I0624 03:32:15.065321    6914 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0624 03:32:15.128844    6914 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0624 03:32:15.128889    6914 main.go:141] libmachine: Using SSH client type: native
	I0624 03:32:15.129011    6914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d82900] 0x100d85160 <nil>  [] 0s} localhost 51107 <nil> <nil>}
	I0624 03:32:15.129020    6914 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0624 03:32:15.471994    6914 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
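The diff-or-replace one-liner above is the idiom that keeps restarts cheap: the freshly rendered unit goes to docker.service.new, and only if it differs from what is installed does the provisioner swap it in, daemon-reload, enable, and restart Docker. Here diff fails because no unit existed yet, so the whole else-branch runs and the symlink gets created. A Go sketch of the same update-only-if-changed pattern, with illustrative paths and plain systemctl calls (it compares in memory rather than shelling out to diff, but the effect is the same):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// updateUnit installs newContents at path and restarts docker, but only
	// when the contents actually changed; a missing file reads as empty and
	// therefore always differs.
	func updateUnit(path string, newContents []byte) error {
		old, _ := os.ReadFile(path)
		if bytes.Equal(old, newContents) {
			return nil // unchanged: skip the needless docker restart
		}
		if err := os.WriteFile(path+".new", newContents, 0o644); err != nil {
			return err
		}
		if err := os.Rename(path+".new", path); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "enable", "docker"},
			{"systemctl", "restart", "docker"},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %s", err, out)
			}
		}
		return nil
	}

	func main() {
		// Illustrative path; on a real guest this would be the systemd unit dir.
		_ = updateUnit("/tmp/docker.service", []byte("[Unit]\nDescription=Docker Application Container Engine\n"))
	}
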
	I0624 03:32:15.472010    6914 machine.go:97] duration metric: took 752.064209ms to provisionDockerMachine
	I0624 03:32:15.472016    6914 start.go:293] postStartSetup for "stopped-upgrade-252000" (driver="qemu2")
	I0624 03:32:15.472022    6914 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0624 03:32:15.472079    6914 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0624 03:32:15.472090    6914 sshutil.go:53] new ssh client: &{IP:localhost Port:51107 SSHKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/stopped-upgrade-252000/id_rsa Username:docker}
	I0624 03:32:15.506683    6914 ssh_runner.go:195] Run: cat /etc/os-release
	I0624 03:32:15.507951    6914 info.go:137] Remote host: Buildroot 2021.02.12
	I0624 03:32:15.507959    6914 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19124-4612/.minikube/addons for local assets ...
	I0624 03:32:15.508033    6914 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19124-4612/.minikube/files for local assets ...
	I0624 03:32:15.508125    6914 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19124-4612/.minikube/files/etc/ssl/certs/51362.pem -> 51362.pem in /etc/ssl/certs
	I0624 03:32:15.508226    6914 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0624 03:32:15.511002    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/files/etc/ssl/certs/51362.pem --> /etc/ssl/certs/51362.pem (1708 bytes)
	I0624 03:32:15.517593    6914 start.go:296] duration metric: took 45.572625ms for postStartSetup
	I0624 03:32:15.517609    6914 fix.go:56] duration metric: took 20.381250875s for fixHost
	I0624 03:32:15.517640    6914 main.go:141] libmachine: Using SSH client type: native
	I0624 03:32:15.517739    6914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d82900] 0x100d85160 <nil>  [] 0s} localhost 51107 <nil> <nil>}
	I0624 03:32:15.517743    6914 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0624 03:32:15.574829    6914 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719225135.306067545
	
	I0624 03:32:15.574838    6914 fix.go:216] guest clock: 1719225135.306067545
	I0624 03:32:15.574842    6914 fix.go:229] Guest: 2024-06-24 03:32:15.306067545 -0700 PDT Remote: 2024-06-24 03:32:15.517612 -0700 PDT m=+20.489537709 (delta=-211.544455ms)
	I0624 03:32:15.574852    6914 fix.go:200] guest clock delta is within tolerance: -211.544455ms
	I0624 03:32:15.574855    6914 start.go:83] releasing machines lock for "stopped-upgrade-252000", held for 20.438509417s
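
The fix.go lines above sanity-check clock skew between host and guest: the guest reports `date +%s.%N`, the host subtracts its own wall clock, and a small delta such as -211ms is accepted as within tolerance. A sketch of that comparison; the 2-second threshold below is an assumption for illustration, since the log does not state the real tolerance:

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"time"
	)

	func main() {
		guestOut := "1719225135.306067545" // output of `date +%s.%N` over SSH
		secs, _ := strconv.ParseFloat(guestOut, 64)
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := guest.Sub(time.Now())
		fmt.Printf("guest clock delta: %v (within tolerance: %v)\n",
			delta, math.Abs(delta.Seconds()) < 2)
	}
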
	I0624 03:32:15.574919    6914 ssh_runner.go:195] Run: cat /version.json
	I0624 03:32:15.574923    6914 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0624 03:32:15.574927    6914 sshutil.go:53] new ssh client: &{IP:localhost Port:51107 SSHKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/stopped-upgrade-252000/id_rsa Username:docker}
	I0624 03:32:15.574942    6914 sshutil.go:53] new ssh client: &{IP:localhost Port:51107 SSHKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/stopped-upgrade-252000/id_rsa Username:docker}
	W0624 03:32:15.575597    6914 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:51290->127.0.0.1:51107: write: broken pipe
	I0624 03:32:15.575614    6914 retry.go:31] will retry after 166.471983ms: ssh: handshake failed: write tcp 127.0.0.1:51290->127.0.0.1:51107: write: broken pipe
	W0624 03:32:15.773148    6914 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
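The broken-pipe handshake failure above is retried rather than treated as fatal; "will retry after 166.471983ms" is the signature of a backoff helper. A generic sketch of such a helper, assuming exponential backoff with jitter (the actual policy inside retry.go is not visible in this log):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs f up to attempts times, doubling the base delay each round
	// and adding jitter so concurrent clients do not retry in lockstep.
	func retry(attempts int, base time.Duration, f func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = f(); err == nil {
				return nil
			}
			d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		_ = retry(3, 100*time.Millisecond, func() error {
			return errors.New("ssh: handshake failed")
		})
	}
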
	I0624 03:32:15.773200    6914 ssh_runner.go:195] Run: systemctl --version
	I0624 03:32:15.774983    6914 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0624 03:32:15.776797    6914 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0624 03:32:15.776836    6914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0624 03:32:15.779940    6914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0624 03:32:15.784589    6914 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0624 03:32:15.784649    6914 start.go:494] detecting cgroup driver to use...
	I0624 03:32:15.784790    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 03:32:15.792347    6914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0624 03:32:15.795694    6914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0624 03:32:15.799059    6914 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0624 03:32:15.799083    6914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0624 03:32:15.802444    6914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 03:32:15.805279    6914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0624 03:32:15.808184    6914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 03:32:15.811492    6914 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0624 03:32:15.814664    6914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0624 03:32:15.817614    6914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0624 03:32:15.820305    6914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0624 03:32:15.823563    6914 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0624 03:32:15.827115    6914 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0624 03:32:15.830130    6914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:32:15.892646    6914 ssh_runner.go:195] Run: sudo systemctl restart containerd
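
Everything from crictl.yaml down to this daemon-reload is plain sed surgery on /etc/containerd/config.toml: pin the pause image, force `SystemdCgroup = false` so containerd matches the cgroupfs driver chosen above, migrate runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. The same edit can be done in-process; a sketch using Go's regexp, with the pattern mirroring the sed expression in the log:

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		cfg := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  SystemdCgroup = true
	`
		// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		fmt.Print(re.ReplaceAllString(cfg, "${1}SystemdCgroup = false"))
	}
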
	I0624 03:32:15.903467    6914 start.go:494] detecting cgroup driver to use...
	I0624 03:32:15.903550    6914 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0624 03:32:15.911126    6914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 03:32:15.915953    6914 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0624 03:32:15.921862    6914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 03:32:15.926459    6914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 03:32:15.931312    6914 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0624 03:32:15.955212    6914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 03:32:15.960746    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 03:32:15.966537    6914 ssh_runner.go:195] Run: which cri-dockerd
	I0624 03:32:15.967605    6914 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0624 03:32:15.970698    6914 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0624 03:32:15.975728    6914 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0624 03:32:16.041735    6914 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0624 03:32:16.109713    6914 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0624 03:32:16.109783    6914 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0624 03:32:16.115778    6914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:32:16.190608    6914 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0624 03:32:17.316564    6914 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.125949792s)
	I0624 03:32:17.316619    6914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0624 03:32:17.324072    6914 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0624 03:32:17.333012    6914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0624 03:32:17.337711    6914 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0624 03:32:17.398772    6914 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0624 03:32:17.463600    6914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:32:17.549330    6914 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0624 03:32:17.554696    6914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0624 03:32:17.559138    6914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:32:17.623722    6914 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0624 03:32:17.661541    6914 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0624 03:32:17.661617    6914 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0624 03:32:17.663634    6914 start.go:562] Will wait 60s for crictl version
	I0624 03:32:17.663678    6914 ssh_runner.go:195] Run: which crictl
	I0624 03:32:17.664881    6914 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0624 03:32:17.679281    6914 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
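
"Will wait 60s for socket path /var/run/cri-dockerd.sock" above is a poll loop: stat the socket until it appears or the deadline passes, then do the same for a working crictl. A minimal sketch of that wait, assuming a 500ms poll interval (the real interval is not shown in the log):

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls for path's existence, mirroring the log's
	// `stat /var/run/cri-dockerd.sock` check, until the timeout elapses.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return errors.New("timed out waiting for " + path)
	}

	func main() {
		fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
	}
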
	I0624 03:32:17.679351    6914 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0624 03:32:17.695995    6914 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0624 03:32:17.714748    6914 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0624 03:32:17.714817    6914 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0624 03:32:17.716106    6914 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0624 03:32:17.720108    6914 kubeadm.go:877] updating cluster {Name:stopped-upgrade-252000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51139 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-252000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0624 03:32:17.720155    6914 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0624 03:32:17.720195    6914 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0624 03:32:17.732741    6914 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0624 03:32:17.732759    6914 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0624 03:32:17.732814    6914 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0624 03:32:17.736446    6914 ssh_runner.go:195] Run: which lz4
	I0624 03:32:17.737912    6914 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0624 03:32:17.739271    6914 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0624 03:32:17.739294    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0624 03:32:18.500026    6914 docker.go:649] duration metric: took 762.156167ms to copy over tarball
	I0624 03:32:18.500090    6914 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0624 03:32:19.651728    6914 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.151635042s)
	I0624 03:32:19.651741    6914 ssh_runner.go:146] rm: /preloaded.tar.lz4
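
The preload sequence above is check, copy, extract, delete: stat /preloaded.tar.lz4 on the guest (exit status 1 means absent), scp the ~360MB tarball over, untar it into /var preserving xattrs, then remove the tarball. A condensed sketch of that flow; the local cp stands in for the scp hop and error handling is trimmed:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// ensurePreload copies the cached image tarball to the guest only when it
	// is missing, extracts it with the same tar flags shown in the log, and
	// cleans up afterwards.
	func ensurePreload(local, remote string) error {
		if _, err := os.Stat(remote); err == nil {
			return nil // already present; skip the large copy
		}
		if err := exec.Command("cp", local, remote).Run(); err != nil { // stands in for scp
			return err
		}
		if err := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", remote).Run(); err != nil {
			return err
		}
		return os.Remove(remote)
	}

	func main() {
		fmt.Println(ensurePreload("preloaded-images.tar.lz4", "/preloaded.tar.lz4"))
	}
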
	I0624 03:32:19.667842    6914 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0624 03:32:19.671372    6914 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0624 03:32:19.676729    6914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:32:19.757857    6914 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0624 03:32:21.437481    6914 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.679622541s)
	I0624 03:32:21.437572    6914 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0624 03:32:21.452810    6914 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0624 03:32:21.452826    6914 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0624 03:32:21.452831    6914 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0624 03:32:21.459128    6914 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0624 03:32:21.459156    6914 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0624 03:32:21.459183    6914 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0624 03:32:21.459190    6914 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0624 03:32:21.459220    6914 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0624 03:32:21.459234    6914 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0624 03:32:21.459251    6914 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0624 03:32:21.459354    6914 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0624 03:32:21.467578    6914 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0624 03:32:21.467727    6914 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0624 03:32:21.468272    6914 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0624 03:32:21.468904    6914 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0624 03:32:21.469014    6914 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0624 03:32:21.469062    6914 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0624 03:32:21.469105    6914 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0624 03:32:21.469125    6914 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	W0624 03:32:22.348868    6914 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0624 03:32:22.349064    6914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0624 03:32:22.349318    6914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0624 03:32:22.383735    6914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0624 03:32:22.392715    6914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0624 03:32:22.395839    6914 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0624 03:32:22.395871    6914 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0624 03:32:22.395885    6914 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0624 03:32:22.395909    6914 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0624 03:32:22.395923    6914 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0624 03:32:22.395949    6914 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0624 03:32:22.424019    6914 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0624 03:32:22.424046    6914 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0624 03:32:22.424069    6914 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0624 03:32:22.424084    6914 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0624 03:32:22.424107    6914 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0624 03:32:22.424181    6914 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0624 03:32:22.429587    6914 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0624 03:32:22.429717    6914 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0624 03:32:22.432794    6914 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0624 03:32:22.432890    6914 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	W0624 03:32:22.435403    6914 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0624 03:32:22.435504    6914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0624 03:32:22.450886    6914 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0624 03:32:22.450906    6914 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0624 03:32:22.450888    6914 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0624 03:32:22.450922    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0624 03:32:22.450955    6914 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0624 03:32:22.450967    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0624 03:32:22.470525    6914 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0624 03:32:22.470537    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0624 03:32:22.474619    6914 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0624 03:32:22.474639    6914 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0624 03:32:22.474697    6914 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0624 03:32:22.502868    6914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0624 03:32:22.504879    6914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0624 03:32:22.509405    6914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0624 03:32:22.523741    6914 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0624 03:32:22.529206    6914 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0624 03:32:22.529222    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0624 03:32:22.530097    6914 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0624 03:32:22.530212    6914 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0624 03:32:22.537097    6914 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0624 03:32:22.537117    6914 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0624 03:32:22.537175    6914 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0624 03:32:22.551421    6914 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0624 03:32:22.551442    6914 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0624 03:32:22.551502    6914 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0624 03:32:22.551895    6914 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0624 03:32:22.551905    6914 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0624 03:32:22.551928    6914 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0624 03:32:22.603225    6914 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0624 03:32:22.603264    6914 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0624 03:32:22.603292    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0624 03:32:22.603297    6914 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0624 03:32:22.603333    6914 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0624 03:32:22.603384    6914 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0624 03:32:22.627591    6914 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0624 03:32:22.627609    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0624 03:32:22.864905    6914 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0624 03:32:22.864944    6914 cache_images.go:92] duration metric: took 1.412118708s to LoadCachedImages
	W0624 03:32:22.864984    6914 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
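
The cache_images pass above works image by image: docker image inspect compares the loaded image ID against the expected hash, a mismatch (here caused by amd64 images in an arm64 preload) triggers docker rmi, and the cached per-image tarball is streamed through docker load. It ends in the non-fatal warning above because one cached file, kube-controller-manager_v1.24.1, is missing on the host. A sketch of that ensure-image loop; wantID is a placeholder digest and the scp of the tarball to the guest is elided:

	package main

	import (
		"os"
		"os/exec"
		"strings"
	)

	// ensureImage reloads ref from cachedTar unless the runtime already holds
	// it with the expected image ID.
	func ensureImage(ref, wantID, cachedTar string) error {
		out, _ := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", ref).Output()
		if strings.TrimSpace(string(out)) == wantID {
			return nil // already loaded with the right content
		}
		_ = exec.Command("docker", "rmi", ref).Run() // drop the stale or wrong-arch copy
		f, err := os.Open(cachedTar)
		if err != nil {
			return err
		}
		defer f.Close()
		load := exec.Command("docker", "load")
		load.Stdin = f
		return load.Run()
	}

	func main() {
		_ = ensureImage("registry.k8s.io/pause:3.7", "sha256:placeholder",
			"/var/lib/minikube/images/pause_3.7")
	}
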
	I0624 03:32:22.864990    6914 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0624 03:32:22.865054    6914 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-252000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-252000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
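
The kubelet drop-in above is rendered from the node config: the empty ExecStart= clears the base unit's command, and the second ExecStart pins the versioned binary with the cri-dockerd socket, hostname override, and node IP. A sketch of assembling that command line; kubeletExecStart is a hypothetical helper shown only to make the flag wiring explicit:

	package main

	import (
		"fmt"
		"strings"
	)

	// kubeletExecStart builds the ExecStart line seen in the generated unit.
	func kubeletExecStart(version, nodeName, nodeIP string) string {
		flags := []string{
			"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
			"--config=/var/lib/kubelet/config.yaml",
			"--container-runtime-endpoint=unix:///var/run/cri-dockerd.sock",
			"--hostname-override=" + nodeName,
			"--kubeconfig=/etc/kubernetes/kubelet.conf",
			"--node-ip=" + nodeIP,
		}
		return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet %s",
			version, strings.Join(flags, " "))
	}

	func main() {
		fmt.Println(kubeletExecStart("v1.24.1", "stopped-upgrade-252000", "10.0.2.15"))
	}
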
	I0624 03:32:22.865123    6914 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0624 03:32:22.878329    6914 cni.go:84] Creating CNI manager for ""
	I0624 03:32:22.878340    6914 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:32:22.878358    6914 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0624 03:32:22.878369    6914 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-252000 NodeName:stopped-upgrade-252000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0624 03:32:22.878427    6914 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-252000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
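The generated kubeadm config above is a single file carrying four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A stdlib-only sketch of splitting such a file into its component documents; a real consumer would hand each one to a YAML parser:

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		cfg := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n---\n" +
			"apiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\n"
		// Split on the document separator and report each document's kind.
		for i, doc := range strings.Split(cfg, "\n---\n") {
			for _, line := range strings.Split(doc, "\n") {
				if strings.HasPrefix(line, "kind:") {
					fmt.Printf("document %d -> %s\n", i, line)
				}
			}
		}
	}
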
	I0624 03:32:22.878494    6914 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0624 03:32:22.881490    6914 binaries.go:44] Found k8s binaries, skipping transfer
	I0624 03:32:22.881517    6914 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0624 03:32:22.884092    6914 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0624 03:32:22.889213    6914 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0624 03:32:22.893841    6914 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0624 03:32:22.899049    6914 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0624 03:32:22.900332    6914 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0624 03:32:22.908120    6914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:32:22.988978    6914 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0624 03:32:22.994248    6914 certs.go:68] Setting up /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000 for IP: 10.0.2.15
	I0624 03:32:22.994256    6914 certs.go:194] generating shared ca certs ...
	I0624 03:32:22.994264    6914 certs.go:226] acquiring lock for ca certs: {Name:mk1070bf28491713fa565ef6662c76d5a9260883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:32:22.994489    6914 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/ca.key
	I0624 03:32:22.994530    6914 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/proxy-client-ca.key
	I0624 03:32:22.994535    6914 certs.go:256] generating profile certs ...
	I0624 03:32:22.994593    6914 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/client.key
	I0624 03:32:22.994605    6914 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/apiserver.key.addb1750
	I0624 03:32:22.994616    6914 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/apiserver.crt.addb1750 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0624 03:32:23.111511    6914 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/apiserver.crt.addb1750 ...
	I0624 03:32:23.111530    6914 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/apiserver.crt.addb1750: {Name:mkbecaa613f108e08abc6698a40ff590b13932c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:32:23.111855    6914 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/apiserver.key.addb1750 ...
	I0624 03:32:23.111861    6914 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/apiserver.key.addb1750: {Name:mk0aec9a80b71fdacc6fd00e84a498bf758d161c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:32:23.112001    6914 certs.go:381] copying /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/apiserver.crt.addb1750 -> /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/apiserver.crt
	I0624 03:32:23.112139    6914 certs.go:385] copying /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/apiserver.key.addb1750 -> /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/apiserver.key
	I0624 03:32:23.112315    6914 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/proxy-client.key
	I0624 03:32:23.112439    6914 certs.go:484] found cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/5136.pem (1338 bytes)
	W0624 03:32:23.112460    6914 certs.go:480] ignoring /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/5136_empty.pem, impossibly tiny 0 bytes
	I0624 03:32:23.112465    6914 certs.go:484] found cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca-key.pem (1675 bytes)
	I0624 03:32:23.112485    6914 certs.go:484] found cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem (1082 bytes)
	I0624 03:32:23.112502    6914 certs.go:484] found cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem (1123 bytes)
	I0624 03:32:23.112521    6914 certs.go:484] found cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/key.pem (1679 bytes)
	I0624 03:32:23.112558    6914 certs.go:484] found cert: /Users/jenkins/minikube-integration/19124-4612/.minikube/files/etc/ssl/certs/51362.pem (1708 bytes)
	I0624 03:32:23.112924    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0624 03:32:23.119715    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0624 03:32:23.126603    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0624 03:32:23.133514    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0624 03:32:23.140152    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0624 03:32:23.147361    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0624 03:32:23.154677    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0624 03:32:23.161751    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0624 03:32:23.168154    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0624 03:32:23.175163    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/5136.pem --> /usr/share/ca-certificates/5136.pem (1338 bytes)
	I0624 03:32:23.181987    6914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19124-4612/.minikube/files/etc/ssl/certs/51362.pem --> /usr/share/ca-certificates/51362.pem (1708 bytes)
	I0624 03:32:23.188429    6914 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0624 03:32:23.194046    6914 ssh_runner.go:195] Run: openssl version
	I0624 03:32:23.195862    6914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0624 03:32:23.199433    6914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0624 03:32:23.201038    6914 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 24 10:31 /usr/share/ca-certificates/minikubeCA.pem
	I0624 03:32:23.201056    6914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0624 03:32:23.202689    6914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0624 03:32:23.205485    6914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5136.pem && ln -fs /usr/share/ca-certificates/5136.pem /etc/ssl/certs/5136.pem"
	I0624 03:32:23.208367    6914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5136.pem
	I0624 03:32:23.209855    6914 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 24 10:19 /usr/share/ca-certificates/5136.pem
	I0624 03:32:23.209874    6914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5136.pem
	I0624 03:32:23.211619    6914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5136.pem /etc/ssl/certs/51391683.0"
	I0624 03:32:23.214966    6914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/51362.pem && ln -fs /usr/share/ca-certificates/51362.pem /etc/ssl/certs/51362.pem"
	I0624 03:32:23.217939    6914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/51362.pem
	I0624 03:32:23.219310    6914 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 24 10:19 /usr/share/ca-certificates/51362.pem
	I0624 03:32:23.219328    6914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/51362.pem
	I0624 03:32:23.221189    6914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/51362.pem /etc/ssl/certs/3ec20f2e.0"
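
The openssl block above installs each CA under /usr/share/ca-certificates and then creates the <subject-hash>.0 symlink (b5213941.0, 51391683.0, 3ec20f2e.0) that OpenSSL's lookup-by-hash directory scheme requires. A sketch of that install step, shelling out to `openssl x509 -hash` exactly as the log does; running it for real needs root to write under /etc/ssl/certs:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// installCA links a PEM cert into /etc/ssl/certs under its subject hash,
	// mirroring the `ln -fs` in the log.
	func installCA(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		_ = os.Remove(link) // force, like ln -f
		return os.Symlink(pem, link)
	}

	func main() {
		fmt.Println(installCA("/usr/share/ca-certificates/minikubeCA.pem"))
	}
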
	I0624 03:32:23.224254    6914 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0624 03:32:23.225872    6914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0624 03:32:23.227826    6914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0624 03:32:23.229665    6914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0624 03:32:23.231560    6914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0624 03:32:23.233502    6914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0624 03:32:23.235259    6914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0624 03:32:23.237151    6914 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-252000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51139 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-252000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0624 03:32:23.237224    6914 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0624 03:32:23.248237    6914 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0624 03:32:23.251409    6914 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0624 03:32:23.251415    6914 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0624 03:32:23.251421    6914 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0624 03:32:23.251442    6914 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0624 03:32:23.254508    6914 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0624 03:32:23.254554    6914 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-252000" does not appear in /Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:32:23.254572    6914 kubeconfig.go:62] /Users/jenkins/minikube-integration/19124-4612/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-252000" cluster setting kubeconfig missing "stopped-upgrade-252000" context setting]
	I0624 03:32:23.254748    6914 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/kubeconfig: {Name:mkbbeb070f5681b596c7409cd66efdb520d422d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:32:23.255391    6914 kapi.go:59] client config for stopped-upgrade-252000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/client.key", CAFile:"/Users/jenkins/minikube-integration/19124-4612/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10210ed80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0624 03:32:23.256225    6914 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0624 03:32:23.258961    6914 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-252000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0624 03:32:23.258967    6914 kubeadm.go:1154] stopping kube-system containers ...
	I0624 03:32:23.259007    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0624 03:32:23.269397    6914 docker.go:483] Stopping containers: [f1ef49ef3795 05c0542721e3 f481d5c8ca3d 335d1abf4b16 bf89cebed9fa 54abcad50314 a77b085de8ed d0208fca4534]
	I0624 03:32:23.269461    6914 ssh_runner.go:195] Run: docker stop f1ef49ef3795 05c0542721e3 f481d5c8ca3d 335d1abf4b16 bf89cebed9fa 54abcad50314 a77b085de8ed d0208fca4534
	I0624 03:32:23.279646    6914 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0624 03:32:23.285599    6914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0624 03:32:23.288262    6914 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0624 03:32:23.288267    6914 kubeadm.go:156] found existing configuration files:
	
	I0624 03:32:23.288286    6914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51139 /etc/kubernetes/admin.conf
	I0624 03:32:23.291294    6914 kubeadm.go:162] "https://control-plane.minikube.internal:51139" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51139 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0624 03:32:23.291311    6914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0624 03:32:23.294134    6914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51139 /etc/kubernetes/kubelet.conf
	I0624 03:32:23.296475    6914 kubeadm.go:162] "https://control-plane.minikube.internal:51139" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51139 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0624 03:32:23.296490    6914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0624 03:32:23.299568    6914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51139 /etc/kubernetes/controller-manager.conf
	I0624 03:32:23.302471    6914 kubeadm.go:162] "https://control-plane.minikube.internal:51139" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51139 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0624 03:32:23.302494    6914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0624 03:32:23.304903    6914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51139 /etc/kubernetes/scheduler.conf
	I0624 03:32:23.307635    6914 kubeadm.go:162] "https://control-plane.minikube.internal:51139" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51139 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0624 03:32:23.307653    6914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0624 03:32:23.310642    6914 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0624 03:32:23.313222    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0624 03:32:23.335961    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0624 03:32:23.928951    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0624 03:32:24.037940    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0624 03:32:24.069487    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0624 03:32:24.092567    6914 api_server.go:52] waiting for apiserver process to appear ...
	I0624 03:32:24.092642    6914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 03:32:24.594431    6914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 03:32:25.094741    6914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 03:32:25.099377    6914 api_server.go:72] duration metric: took 1.006820167s to wait for apiserver process to appear ...
	I0624 03:32:25.099388    6914 api_server.go:88] waiting for apiserver healthz status ...
	I0624 03:32:25.099397    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:32:30.101464    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:32:30.101498    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:32:35.101649    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:32:35.101662    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:32:40.101903    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:32:40.101946    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:32:45.102681    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:32:45.102733    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:32:50.103385    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:32:50.103432    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:32:55.104292    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:32:55.104359    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:00.104852    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:00.104873    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:05.105985    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:05.106045    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:10.107018    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:10.107052    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:15.108829    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:15.108888    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:20.110063    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:20.110128    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:25.112550    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:25.112806    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:33:25.137621    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:33:25.137746    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:33:25.155040    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:33:25.155124    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:33:25.168606    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:33:25.168669    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:33:25.179813    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:33:25.179874    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:33:25.190443    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:33:25.190512    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:33:25.200977    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:33:25.201041    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:33:25.214930    6914 logs.go:276] 0 containers: []
	W0624 03:33:25.214941    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:33:25.215006    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:33:25.225344    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:33:25.225360    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:33:25.225365    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:33:25.239817    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:33:25.239831    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:33:25.253235    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:33:25.253254    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:33:25.267579    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:33:25.267593    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:33:25.282118    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:33:25.282128    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:33:25.296118    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:33:25.296128    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:33:25.307230    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:33:25.307240    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:33:25.343899    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:33:25.343910    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:33:25.457147    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:33:25.457161    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:33:25.483649    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:33:25.483659    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:33:25.496228    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:33:25.496239    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:33:25.513987    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:33:25.513997    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:33:25.525331    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:33:25.525342    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:33:25.529409    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:33:25.529414    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:33:25.543158    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:33:25.543167    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:33:25.554653    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:33:25.554664    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:33:25.565369    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:33:25.565380    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:33:28.091868    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:33.092884    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:33.093124    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:33:33.115816    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:33:33.115923    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:33:33.132413    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:33:33.132493    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:33:33.145195    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:33:33.145262    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:33:33.156809    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:33:33.156885    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:33:33.171049    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:33:33.171118    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:33:33.184349    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:33:33.184406    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:33:33.195355    6914 logs.go:276] 0 containers: []
	W0624 03:33:33.195369    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:33:33.195449    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:33:33.206067    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:33:33.206083    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:33:33.206088    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:33:33.233604    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:33:33.233616    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:33:33.250614    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:33:33.250625    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:33:33.262253    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:33:33.262263    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:33:33.274402    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:33:33.274411    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:33:33.286629    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:33:33.286638    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:33:33.323001    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:33:33.323014    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:33:33.327185    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:33:33.327194    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:33:33.365283    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:33:33.365294    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:33:33.379053    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:33:33.379064    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:33:33.391091    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:33:33.391102    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:33:33.405230    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:33:33.405244    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:33:33.423835    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:33:33.423845    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:33:33.437864    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:33:33.437876    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:33:33.454635    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:33:33.454647    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:33:33.472352    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:33:33.472362    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:33:33.496803    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:33:33.496811    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:33:36.010745    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:41.013182    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:41.013482    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:33:41.045521    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:33:41.045648    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:33:41.064606    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:33:41.064703    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:33:41.078589    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:33:41.078668    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:33:41.090060    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:33:41.090142    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:33:41.101016    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:33:41.101080    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:33:41.113305    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:33:41.113366    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:33:41.123379    6914 logs.go:276] 0 containers: []
	W0624 03:33:41.123390    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:33:41.123446    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:33:41.134158    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:33:41.134207    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:33:41.134215    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:33:41.138236    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:33:41.138246    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:33:41.152675    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:33:41.152684    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:33:41.165478    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:33:41.165489    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:33:41.179232    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:33:41.179241    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:33:41.204249    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:33:41.204260    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:33:41.218588    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:33:41.218599    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:33:41.235924    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:33:41.235933    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:33:41.247337    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:33:41.247348    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:33:41.284870    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:33:41.284879    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:33:41.299368    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:33:41.299379    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:33:41.311161    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:33:41.311173    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:33:41.335727    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:33:41.335737    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:33:41.347540    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:33:41.347550    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:33:41.386129    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:33:41.386143    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:33:41.400797    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:33:41.400811    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:33:41.415040    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:33:41.415054    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:33:43.929140    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:48.931514    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:48.931786    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:33:48.963952    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:33:48.964060    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:33:48.979053    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:33:48.979120    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:33:48.991640    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:33:48.991714    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:33:49.002736    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:33:49.002805    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:33:49.016511    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:33:49.016584    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:33:49.027043    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:33:49.027111    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:33:49.037431    6914 logs.go:276] 0 containers: []
	W0624 03:33:49.037441    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:33:49.037497    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:33:49.051637    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:33:49.051657    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:33:49.051663    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:33:49.074209    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:33:49.074220    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:33:49.088080    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:33:49.088091    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:33:49.101529    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:33:49.101540    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:33:49.118578    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:33:49.118588    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:33:49.153519    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:33:49.153530    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:33:49.167782    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:33:49.167793    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:33:49.178779    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:33:49.178790    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:33:49.190872    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:33:49.190882    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:33:49.208475    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:33:49.208485    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:33:49.231526    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:33:49.231535    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:33:49.235463    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:33:49.235469    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:33:49.269098    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:33:49.269109    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:33:49.283702    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:33:49.283712    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:33:49.298124    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:33:49.298135    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:33:49.309698    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:33:49.309709    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:33:49.324595    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:33:49.324605    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:33:51.864668    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:33:56.866951    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:33:56.867295    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:33:56.898171    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:33:56.898301    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:33:56.917359    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:33:56.917457    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:33:56.932517    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:33:56.932596    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:33:56.944564    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:33:56.944633    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:33:56.956487    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:33:56.956553    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:33:56.967189    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:33:56.967258    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:33:56.977389    6914 logs.go:276] 0 containers: []
	W0624 03:33:56.977405    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:33:56.977461    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:33:56.988217    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:33:56.988236    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:33:56.988242    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:33:57.012274    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:33:57.012282    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:33:57.049109    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:33:57.049126    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:33:57.075406    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:33:57.075415    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:33:57.089353    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:33:57.089363    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:33:57.106863    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:33:57.106878    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:33:57.118022    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:33:57.118035    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:33:57.152923    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:33:57.152939    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:33:57.167451    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:33:57.167461    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:33:57.181243    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:33:57.181258    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:33:57.192893    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:33:57.192909    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:33:57.207049    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:33:57.207059    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:33:57.219353    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:33:57.219363    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:33:57.231168    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:33:57.231178    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:33:57.235397    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:33:57.235403    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:33:57.257978    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:33:57.257991    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:33:57.269346    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:33:57.269357    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:33:59.782330    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:34:04.784568    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:34:04.784661    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:34:04.795393    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:34:04.795464    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:34:04.806188    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:34:04.806266    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:34:04.826533    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:34:04.826604    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:34:04.836962    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:34:04.837031    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:34:04.847686    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:34:04.847756    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:34:04.858084    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:34:04.858153    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:34:04.871279    6914 logs.go:276] 0 containers: []
	W0624 03:34:04.871290    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:34:04.871347    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:34:04.881422    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:34:04.881439    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:34:04.881444    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:34:04.916209    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:34:04.916220    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:34:04.930717    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:34:04.930728    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:34:04.945293    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:34:04.945303    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:34:04.956742    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:34:04.956753    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:34:04.968639    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:34:04.968652    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:34:04.980830    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:34:04.980841    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:34:04.992765    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:34:04.992776    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:34:05.029982    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:34:05.029992    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:34:05.044520    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:34:05.044532    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:34:05.071709    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:34:05.071719    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:34:05.082463    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:34:05.082473    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:34:05.099794    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:34:05.099804    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:34:05.113625    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:34:05.113635    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:34:05.117799    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:34:05.117807    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:34:05.131826    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:34:05.131836    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:34:05.143429    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:34:05.143440    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:34:07.670648    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:34:12.672932    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:34:12.673041    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:34:12.690151    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:34:12.690220    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:34:12.699988    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:34:12.700056    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:34:12.715647    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:34:12.715706    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:34:12.726015    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:34:12.726079    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:34:12.736034    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:34:12.736098    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:34:12.746978    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:34:12.747039    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:34:12.761400    6914 logs.go:276] 0 containers: []
	W0624 03:34:12.761414    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:34:12.761466    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:34:12.771831    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:34:12.771849    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:34:12.771855    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:34:12.783579    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:34:12.783590    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:34:12.797564    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:34:12.797576    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:34:12.808489    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:34:12.808500    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:34:12.819882    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:34:12.819893    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:34:12.824102    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:34:12.824112    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:34:12.835721    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:34:12.835731    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:34:12.861236    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:34:12.861247    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:34:12.880000    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:34:12.880011    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:34:12.892125    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:34:12.892135    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:34:12.903166    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:34:12.903177    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:34:12.938922    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:34:12.938934    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:34:12.953412    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:34:12.953427    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:34:12.969454    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:34:12.969463    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:34:12.984442    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:34:12.984453    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:34:13.001404    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:34:13.001412    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:34:13.025684    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:34:13.025700    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:34:15.565865    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:34:20.567640    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:34:20.567851    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:34:20.590164    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:34:20.590282    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:34:20.604348    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:34:20.604422    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:34:20.616134    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:34:20.616206    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:34:20.626831    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:34:20.626901    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:34:20.638878    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:34:20.638943    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:34:20.649944    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:34:20.650021    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:34:20.660275    6914 logs.go:276] 0 containers: []
	W0624 03:34:20.660287    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:34:20.660344    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:34:20.671019    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:34:20.671034    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:34:20.671041    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:34:20.683805    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:34:20.683819    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:34:20.707394    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:34:20.707405    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:34:20.718867    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:34:20.718882    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:34:20.736903    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:34:20.736917    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:34:20.751704    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:34:20.751717    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:34:20.765615    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:34:20.765630    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:34:20.777392    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:34:20.777401    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:34:20.791425    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:34:20.791436    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:34:20.805389    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:34:20.805400    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:34:20.816742    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:34:20.816752    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:34:20.841176    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:34:20.841187    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:34:20.855098    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:34:20.855107    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:34:20.891393    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:34:20.891405    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:34:20.909430    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:34:20.909445    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:34:20.922346    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:34:20.922358    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:34:20.961546    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:34:20.961556    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:34:23.467831    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:34:28.470570    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:34:28.471103    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:34:28.509386    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:34:28.509547    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:34:28.530819    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:34:28.530929    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:34:28.546917    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:34:28.546998    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:34:28.565398    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:34:28.565466    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:34:28.576225    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:34:28.576297    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:34:28.586804    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:34:28.586868    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:34:28.596668    6914 logs.go:276] 0 containers: []
	W0624 03:34:28.596680    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:34:28.596740    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:34:28.607483    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:34:28.607499    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:34:28.607504    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:34:28.622303    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:34:28.622313    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:34:28.637029    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:34:28.637040    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:34:28.652960    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:34:28.652972    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:34:28.667005    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:34:28.667015    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:34:28.701448    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:34:28.701459    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:34:28.726908    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:34:28.726917    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:34:28.752838    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:34:28.752853    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:34:28.767499    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:34:28.767508    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:34:28.790504    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:34:28.790515    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:34:28.815178    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:34:28.815185    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:34:28.819521    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:34:28.819528    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:34:28.830979    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:34:28.830992    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:34:28.842375    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:34:28.842385    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:34:28.854421    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:34:28.854433    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:34:28.868568    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:34:28.868578    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:34:28.886012    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:34:28.886022    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
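Once the IDs are known, every source is tailed to its last 400 lines: `docker logs --tail 400 <id>` for containers, `journalctl -u <unit> -n 400` for the kubelet and Docker units, a severity-filtered `dmesg` for the kernel, and `kubectl describe nodes` run against the in-VM kubeconfig. The "container status" step uses a shell fallback so it works whether or not crictl is installed. A compact sketch of that gathering loop, with a made-up `gather` helper; the command strings are copied from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one collection command through bash, as the ssh_runner lines
    // above do (there over SSH into the VM, here locally), returning its output.
    func gather(cmd string) string {
        out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        return string(out)
    }

    func main() {
        sources := []struct{ name, cmd string }{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"Docker", "sudo journalctl -u docker -u cri-docker -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
            {"describe nodes", "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
            // falls back to docker when crictl is not on PATH, exactly as logged
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
        }
        for _, s := range sources {
            fmt.Printf("== %s ==\n%s\n", s.name, gather(s.cmd))
        }
        // per-container logs would follow: docker logs --tail 400 <id> for each ID
    }

The remaining cycles in this section repeat exactly these three steps (probe, discover, gather) until the overall start timeout expires.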
	I0624 03:34:31.424863    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:34:36.427272    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:34:36.427580    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:34:36.462496    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:34:36.462625    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:34:36.481274    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:34:36.481371    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:34:36.494807    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:34:36.494883    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:34:36.506611    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:34:36.506678    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:34:36.516972    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:34:36.517031    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:34:36.527358    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:34:36.527432    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:34:36.538058    6914 logs.go:276] 0 containers: []
	W0624 03:34:36.538071    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:34:36.538132    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:34:36.548271    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:34:36.548288    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:34:36.548293    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:34:36.562454    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:34:36.562464    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:34:36.574135    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:34:36.574146    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:34:36.586262    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:34:36.586276    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:34:36.600265    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:34:36.600278    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:34:36.612154    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:34:36.612168    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:34:36.636277    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:34:36.636285    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:34:36.640690    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:34:36.640697    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:34:36.654205    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:34:36.654214    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:34:36.688685    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:34:36.688698    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:34:36.714415    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:34:36.714428    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:34:36.728281    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:34:36.728294    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:34:36.740004    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:34:36.740015    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:34:36.751262    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:34:36.751274    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:34:36.788659    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:34:36.788668    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:34:36.805723    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:34:36.805736    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:34:36.818162    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:34:36.818176    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:34:39.334850    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:34:44.337185    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:34:44.337369    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:34:44.363307    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:34:44.363399    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:34:44.375314    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:34:44.375389    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:34:44.387357    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:34:44.387427    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:34:44.397725    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:34:44.397798    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:34:44.412565    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:34:44.412631    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:34:44.423061    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:34:44.423127    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:34:44.433248    6914 logs.go:276] 0 containers: []
	W0624 03:34:44.433259    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:34:44.433316    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:34:44.443615    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:34:44.443632    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:34:44.443637    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:34:44.458300    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:34:44.458310    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:34:44.472659    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:34:44.472669    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:34:44.484976    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:34:44.484987    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:34:44.499086    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:34:44.499101    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:34:44.513607    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:34:44.513621    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:34:44.532783    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:34:44.532798    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:34:44.544348    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:34:44.544362    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:34:44.559603    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:34:44.559617    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:34:44.571092    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:34:44.571104    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:34:44.607865    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:34:44.607874    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:34:44.642385    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:34:44.642396    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:34:44.656582    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:34:44.656593    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:34:44.660731    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:34:44.660741    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:34:44.685505    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:34:44.685519    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:34:44.701059    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:34:44.701074    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:34:44.718531    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:34:44.718545    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:34:47.245143    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:34:52.247472    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:34:52.247667    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:34:52.269341    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:34:52.269442    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:34:52.286522    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:34:52.286603    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:34:52.304918    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:34:52.304988    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:34:52.315046    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:34:52.315110    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:34:52.325046    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:34:52.325105    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:34:52.335313    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:34:52.335385    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:34:52.345339    6914 logs.go:276] 0 containers: []
	W0624 03:34:52.345354    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:34:52.345414    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:34:52.356170    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:34:52.356188    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:34:52.356193    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:34:52.368720    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:34:52.368733    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:34:52.379786    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:34:52.379798    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:34:52.420651    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:34:52.420662    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:34:52.445572    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:34:52.445583    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:34:52.459529    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:34:52.459540    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:34:52.473387    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:34:52.473398    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:34:52.511403    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:34:52.511411    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:34:52.528931    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:34:52.528942    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:34:52.543769    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:34:52.543780    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:34:52.555293    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:34:52.555306    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:34:52.567438    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:34:52.567449    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:34:52.581430    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:34:52.581440    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:34:52.596259    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:34:52.596269    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:34:52.608111    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:34:52.608121    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:34:52.619395    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:34:52.619405    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:34:52.644387    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:34:52.644431    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:34:55.150766    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:35:00.153031    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:35:00.153134    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:35:00.169399    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:35:00.169470    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:35:00.180088    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:35:00.180168    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:35:00.190847    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:35:00.190910    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:35:00.201193    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:35:00.201265    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:35:00.211635    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:35:00.211707    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:35:00.222232    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:35:00.222296    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:35:00.236695    6914 logs.go:276] 0 containers: []
	W0624 03:35:00.236707    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:35:00.236764    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:35:00.247094    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:35:00.247111    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:35:00.247116    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:35:00.271931    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:35:00.271941    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:35:00.286157    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:35:00.286168    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:35:00.300051    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:35:00.300062    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:35:00.322559    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:35:00.322570    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:35:00.335980    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:35:00.335989    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:35:00.347004    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:35:00.347015    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:35:00.351182    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:35:00.351191    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:35:00.390014    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:35:00.390024    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:35:00.404781    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:35:00.404791    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:35:00.418045    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:35:00.418054    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:35:00.429647    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:35:00.429657    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:35:00.452813    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:35:00.452821    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:35:00.489383    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:35:00.489401    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:35:00.500299    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:35:00.500312    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:35:00.516132    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:35:00.516142    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:35:00.528447    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:35:00.528458    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:35:03.042183    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:35:08.044520    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:35:08.044889    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:35:08.087722    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:35:08.087813    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:35:08.105356    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:35:08.105435    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:35:08.119283    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:35:08.119354    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:35:08.131119    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:35:08.131191    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:35:08.142245    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:35:08.142305    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:35:08.153613    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:35:08.153680    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:35:08.164367    6914 logs.go:276] 0 containers: []
	W0624 03:35:08.164381    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:35:08.164443    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:35:08.180532    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:35:08.180549    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:35:08.180554    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:35:08.218172    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:35:08.218180    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:35:08.233769    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:35:08.233779    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:35:08.245723    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:35:08.245734    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:35:08.259610    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:35:08.259622    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:35:08.270603    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:35:08.270614    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:35:08.282579    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:35:08.282590    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:35:08.301269    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:35:08.301279    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:35:08.326184    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:35:08.326194    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:35:08.340277    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:35:08.340289    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:35:08.359929    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:35:08.359939    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:35:08.375179    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:35:08.375190    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:35:08.398574    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:35:08.398581    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:35:08.402828    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:35:08.402834    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:35:08.437667    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:35:08.437678    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:35:08.451992    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:35:08.452007    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:35:08.464915    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:35:08.464931    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:35:10.982335    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:35:15.983802    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:35:15.984156    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:35:16.021323    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:35:16.021458    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:35:16.041545    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:35:16.041644    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:35:16.060669    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:35:16.060737    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:35:16.073356    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:35:16.073430    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:35:16.084221    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:35:16.084293    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:35:16.095330    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:35:16.095402    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:35:16.106177    6914 logs.go:276] 0 containers: []
	W0624 03:35:16.106191    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:35:16.106249    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:35:16.117280    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:35:16.117299    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:35:16.117305    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:35:16.140082    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:35:16.140095    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:35:16.152898    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:35:16.152909    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:35:16.164342    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:35:16.164353    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:35:16.178499    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:35:16.178511    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:35:16.190719    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:35:16.190730    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:35:16.203135    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:35:16.203145    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:35:16.220695    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:35:16.220707    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:35:16.257843    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:35:16.257851    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:35:16.293174    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:35:16.293185    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:35:16.308786    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:35:16.308796    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:35:16.332872    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:35:16.332881    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:35:16.344639    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:35:16.344650    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:35:16.359373    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:35:16.359384    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:35:16.383766    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:35:16.383776    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:35:16.401029    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:35:16.401038    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:35:16.405708    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:35:16.405713    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:35:18.917559    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:35:23.919875    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:35:23.920043    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:35:23.936138    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:35:23.936225    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:35:23.948886    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:35:23.948964    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:35:23.960124    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:35:23.960194    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:35:23.970747    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:35:23.970812    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:35:23.981207    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:35:23.981268    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:35:23.992482    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:35:23.992554    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:35:24.002864    6914 logs.go:276] 0 containers: []
	W0624 03:35:24.002876    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:35:24.002935    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:35:24.013530    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:35:24.013546    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:35:24.013552    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:35:24.053102    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:35:24.053110    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:35:24.066906    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:35:24.066917    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:35:24.080652    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:35:24.080662    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:35:24.094873    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:35:24.094887    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:35:24.106401    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:35:24.106415    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:35:24.120615    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:35:24.120630    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:35:24.125211    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:35:24.125217    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:35:24.149701    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:35:24.149716    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:35:24.163965    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:35:24.163979    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:35:24.185421    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:35:24.185434    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:35:24.210724    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:35:24.210744    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:35:24.224100    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:35:24.224113    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:35:24.236637    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:35:24.236649    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:35:24.272121    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:35:24.272136    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:35:24.286614    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:35:24.286628    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:35:24.301426    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:35:24.301440    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:35:26.826708    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:35:31.829441    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:35:31.829629    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:35:31.851054    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:35:31.851154    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:35:31.866913    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:35:31.866987    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:35:31.879152    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:35:31.879217    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:35:31.889978    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:35:31.890046    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:35:31.900459    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:35:31.900520    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:35:31.911751    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:35:31.911818    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:35:31.929330    6914 logs.go:276] 0 containers: []
	W0624 03:35:31.929341    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:35:31.929390    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:35:31.939795    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:35:31.939815    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:35:31.939821    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:35:31.954907    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:35:31.954917    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:35:31.971802    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:35:31.971811    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:35:31.995313    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:35:31.995320    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:35:32.007767    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:35:32.007777    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:35:32.048390    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:35:32.048401    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:35:32.082730    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:35:32.082740    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:35:32.098170    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:35:32.098183    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:35:32.109561    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:35:32.109575    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:35:32.113752    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:35:32.113758    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:35:32.127849    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:35:32.127866    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:35:32.141413    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:35:32.141423    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:35:32.152738    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:35:32.152749    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:35:32.166848    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:35:32.166863    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:35:32.178459    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:35:32.178472    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:35:32.215551    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:35:32.215565    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:35:32.229391    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:35:32.229404    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:35:34.746714    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:35:39.749338    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:35:39.749525    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:35:39.766979    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:35:39.767060    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:35:39.780896    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:35:39.780975    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:35:39.792161    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:35:39.792218    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:35:39.802887    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:35:39.802958    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:35:39.812628    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:35:39.812688    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:35:39.824771    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:35:39.824841    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:35:39.834976    6914 logs.go:276] 0 containers: []
	W0624 03:35:39.834988    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:35:39.835041    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:35:39.848748    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:35:39.848763    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:35:39.848769    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:35:39.863669    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:35:39.863680    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:35:39.877280    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:35:39.877291    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:35:39.888423    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:35:39.888435    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:35:39.900350    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:35:39.900363    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:35:39.926199    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:35:39.926211    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:35:39.945978    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:35:39.945989    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:35:39.980459    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:35:39.980471    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:35:39.992425    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:35:39.992435    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:35:40.009235    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:35:40.009245    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:35:40.046217    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:35:40.046229    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:35:40.050174    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:35:40.050182    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:35:40.064103    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:35:40.064113    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:35:40.075967    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:35:40.075979    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:35:40.088208    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:35:40.088221    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:35:40.111258    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:35:40.111269    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:35:40.126067    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:35:40.126076    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:35:42.638895    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:35:47.641169    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:35:47.641398    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:35:47.665677    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:35:47.665789    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:35:47.682394    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:35:47.682473    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:35:47.695016    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:35:47.695079    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:35:47.707453    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:35:47.707526    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:35:47.718916    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:35:47.718979    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:35:47.729402    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:35:47.729464    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:35:47.739788    6914 logs.go:276] 0 containers: []
	W0624 03:35:47.739799    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:35:47.739852    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:35:47.750710    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:35:47.750731    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:35:47.750736    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:35:47.765311    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:35:47.765322    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:35:47.776580    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:35:47.776591    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:35:47.788576    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:35:47.788586    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:35:47.825626    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:35:47.825640    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:35:47.850425    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:35:47.850442    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:35:47.863820    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:35:47.863832    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:35:47.878598    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:35:47.878609    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:35:47.893224    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:35:47.893239    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:35:47.905075    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:35:47.905087    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:35:47.909288    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:35:47.909294    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:35:47.920235    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:35:47.920246    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:35:47.933991    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:35:47.934002    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:35:47.946022    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:35:47.946032    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:35:47.963227    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:35:47.963237    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:35:47.986860    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:35:47.986867    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:35:48.024545    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:35:48.024555    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:35:50.540760    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:35:55.543084    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
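(Editor's note: the "Checking apiserver healthz ... stopped" pairs that recur throughout this section come from a poll loop that issues a GET against the apiserver's /healthz endpoint and gives up when the client timeout fires. A minimal sketch of one such probe in Go follows; the 5-second timeout is inferred from the gap between the "Checking" and "stopped" timestamps, and skipping TLS verification is an assumption made to keep the sketch self-contained, not necessarily how minikube's api_server.go performs the check.)

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Assumed 5s client timeout, matching the observed Checking->stopped gap.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: skip cert verification since the guest apiserver
			// uses a cluster-local CA not trusted by this sketch.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.0.2.15:8443/healthz")
	if err != nil {
		// Mirrors the "stopped: ... Client.Timeout exceeded" lines above.
		fmt.Println("stopped:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}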
	I0624 03:35:55.543444    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:35:55.581434    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:35:55.581574    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:35:55.604164    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:35:55.604291    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:35:55.619727    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:35:55.619803    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:35:55.636843    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:35:55.636912    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:35:55.649032    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:35:55.649097    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:35:55.659450    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:35:55.659511    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:35:55.669596    6914 logs.go:276] 0 containers: []
	W0624 03:35:55.669612    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:35:55.669664    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:35:55.680197    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:35:55.680214    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:35:55.680220    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:35:55.714630    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:35:55.714642    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:35:55.726852    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:35:55.726862    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:35:55.741034    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:35:55.741045    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:35:55.777600    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:35:55.777609    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:35:55.791339    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:35:55.791350    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:35:55.805633    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:35:55.805643    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:35:55.817052    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:35:55.817066    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:35:55.836719    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:35:55.836731    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:35:55.867313    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:35:55.867324    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:35:55.881126    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:35:55.881136    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:35:55.899536    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:35:55.899547    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:35:55.914178    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:35:55.914188    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:35:55.937843    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:35:55.937850    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:35:55.942504    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:35:55.942510    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:35:55.956911    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:35:55.956920    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:35:55.968308    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:35:55.968318    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:35:58.481474    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:36:03.483986    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:36:03.484269    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:36:03.511092    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:36:03.511222    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:36:03.529166    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:36:03.529256    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:36:03.546036    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:36:03.546116    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:36:03.560564    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:36:03.560645    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:36:03.571816    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:36:03.571883    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:36:03.586385    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:36:03.586519    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:36:03.596752    6914 logs.go:276] 0 containers: []
	W0624 03:36:03.596762    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:36:03.596813    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:36:03.606953    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:36:03.606969    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:36:03.606974    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:36:03.642538    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:36:03.642548    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:36:03.657677    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:36:03.657691    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:36:03.671846    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:36:03.671859    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:36:03.683855    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:36:03.683864    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:36:03.695545    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:36:03.695560    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:36:03.717384    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:36:03.717393    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:36:03.756564    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:36:03.756571    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:36:03.779438    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:36:03.779445    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:36:03.791889    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:36:03.791904    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:36:03.806065    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:36:03.806079    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:36:03.833297    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:36:03.833311    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:36:03.844548    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:36:03.844559    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:36:03.848501    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:36:03.848507    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:36:03.860083    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:36:03.860095    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:36:03.874134    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:36:03.874148    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:36:03.885790    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:36:03.885801    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:36:06.402063    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:36:11.404738    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:36:11.405019    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:36:11.440907    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:36:11.441033    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:36:11.458429    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:36:11.458521    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:36:11.471651    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:36:11.471729    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:36:11.484086    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:36:11.484156    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:36:11.494997    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:36:11.495070    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:36:11.505340    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:36:11.505411    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:36:11.530576    6914 logs.go:276] 0 containers: []
	W0624 03:36:11.530589    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:36:11.530650    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:36:11.558265    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:36:11.558286    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:36:11.558291    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:36:11.570569    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:36:11.570580    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:36:11.574894    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:36:11.574899    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:36:11.586447    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:36:11.586458    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:36:11.598531    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:36:11.598540    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:36:11.616424    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:36:11.616433    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:36:11.627986    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:36:11.627997    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:36:11.650438    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:36:11.650449    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:36:11.687355    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:36:11.687367    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:36:11.712239    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:36:11.712249    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:36:11.723902    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:36:11.723916    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:36:11.760708    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:36:11.760716    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:36:11.775613    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:36:11.775624    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:36:11.789927    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:36:11.789936    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:36:11.804216    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:36:11.804227    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:36:11.818180    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:36:11.818194    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:36:11.836903    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:36:11.836912    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:36:14.349446    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:36:19.351821    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:36:19.352058    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:36:19.368708    6914 logs.go:276] 2 containers: [b53cea1f3082 54abcad50314]
	I0624 03:36:19.368796    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:36:19.381836    6914 logs.go:276] 2 containers: [8e73714d3034 f481d5c8ca3d]
	I0624 03:36:19.381904    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:36:19.393397    6914 logs.go:276] 1 containers: [212a33e1a362]
	I0624 03:36:19.393469    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:36:19.403597    6914 logs.go:276] 2 containers: [a480ab9b5fe0 f1ef49ef3795]
	I0624 03:36:19.403660    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:36:19.423126    6914 logs.go:276] 1 containers: [5d093b669de4]
	I0624 03:36:19.423197    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:36:19.444523    6914 logs.go:276] 2 containers: [d3cd5aa3869c bf89cebed9fa]
	I0624 03:36:19.444587    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:36:19.456147    6914 logs.go:276] 0 containers: []
	W0624 03:36:19.456158    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:36:19.456212    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:36:19.466545    6914 logs.go:276] 2 containers: [254755bcc869 66368496c3be]
	I0624 03:36:19.466564    6914 logs.go:123] Gathering logs for storage-provisioner [66368496c3be] ...
	I0624 03:36:19.466569    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66368496c3be"
	I0624 03:36:19.477769    6914 logs.go:123] Gathering logs for kube-apiserver [54abcad50314] ...
	I0624 03:36:19.477779    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54abcad50314"
	I0624 03:36:19.501465    6914 logs.go:123] Gathering logs for kube-scheduler [f1ef49ef3795] ...
	I0624 03:36:19.501477    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ef49ef3795"
	I0624 03:36:19.513655    6914 logs.go:123] Gathering logs for kube-proxy [5d093b669de4] ...
	I0624 03:36:19.513665    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d093b669de4"
	I0624 03:36:19.525342    6914 logs.go:123] Gathering logs for storage-provisioner [254755bcc869] ...
	I0624 03:36:19.525353    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254755bcc869"
	I0624 03:36:19.536988    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:36:19.536998    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:36:19.571054    6914 logs.go:123] Gathering logs for etcd [f481d5c8ca3d] ...
	I0624 03:36:19.571065    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f481d5c8ca3d"
	I0624 03:36:19.587972    6914 logs.go:123] Gathering logs for kube-scheduler [a480ab9b5fe0] ...
	I0624 03:36:19.587983    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a480ab9b5fe0"
	I0624 03:36:19.604836    6914 logs.go:123] Gathering logs for kube-controller-manager [d3cd5aa3869c] ...
	I0624 03:36:19.604846    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cd5aa3869c"
	I0624 03:36:19.622053    6914 logs.go:123] Gathering logs for kube-controller-manager [bf89cebed9fa] ...
	I0624 03:36:19.622063    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf89cebed9fa"
	I0624 03:36:19.643410    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:36:19.643421    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:36:19.665735    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:36:19.665742    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:36:19.670322    6914 logs.go:123] Gathering logs for etcd [8e73714d3034] ...
	I0624 03:36:19.670329    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e73714d3034"
	I0624 03:36:19.684259    6914 logs.go:123] Gathering logs for coredns [212a33e1a362] ...
	I0624 03:36:19.684269    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212a33e1a362"
	I0624 03:36:19.695645    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:36:19.695655    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:36:19.707934    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:36:19.707944    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:36:19.745596    6914 logs.go:123] Gathering logs for kube-apiserver [b53cea1f3082] ...
	I0624 03:36:19.745603    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53cea1f3082"
	I0624 03:36:22.264651    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:36:27.267039    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:36:27.267150    6914 kubeadm.go:591] duration metric: took 4m4.017851s to restartPrimaryControlPlane
	W0624 03:36:27.267218    6914 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0624 03:36:27.267251    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0624 03:36:28.302722    6914 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.035467375s)
	I0624 03:36:28.302777    6914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0624 03:36:28.307653    6914 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0624 03:36:28.310357    6914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0624 03:36:28.312978    6914 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0624 03:36:28.312984    6914 kubeadm.go:156] found existing configuration files:
	
	I0624 03:36:28.313002    6914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51139 /etc/kubernetes/admin.conf
	I0624 03:36:28.315657    6914 kubeadm.go:162] "https://control-plane.minikube.internal:51139" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51139 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0624 03:36:28.315682    6914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0624 03:36:28.318053    6914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51139 /etc/kubernetes/kubelet.conf
	I0624 03:36:28.320744    6914 kubeadm.go:162] "https://control-plane.minikube.internal:51139" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51139 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0624 03:36:28.320765    6914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0624 03:36:28.323780    6914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51139 /etc/kubernetes/controller-manager.conf
	I0624 03:36:28.326046    6914 kubeadm.go:162] "https://control-plane.minikube.internal:51139" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51139 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0624 03:36:28.326068    6914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0624 03:36:28.328732    6914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51139 /etc/kubernetes/scheduler.conf
	I0624 03:36:28.331475    6914 kubeadm.go:162] "https://control-plane.minikube.internal:51139" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51139 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0624 03:36:28.331498    6914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
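(Editor's note: the four grep/rm pairs above implement a simple stale-kubeconfig sweep: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is otherwise deleted so that the following kubeadm init can regenerate it. A condensed sketch of that loop, with the endpoint and file names taken from the log; the local shell-out here is a hypothetical stand-in for minikube's ssh_runner, which runs these commands inside the guest.)

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command through bash; in minikube this would go over SSH.
func run(cmd string) error {
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

func main() {
	endpoint := "https://control-plane.minikube.internal:51139" // from the log above
	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + name
		// grep exits non-zero when the endpoint is absent or the file is
		// missing; either way the file is removed so kubeadm can rewrite it.
		if err := run(fmt.Sprintf("sudo grep %s %s", endpoint, path)); err != nil {
			_ = run("sudo rm -f " + path)
		}
	}
}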
	I0624 03:36:28.333971    6914 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0624 03:36:28.350783    6914 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0624 03:36:28.350820    6914 kubeadm.go:309] [preflight] Running pre-flight checks
	I0624 03:36:28.398345    6914 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0624 03:36:28.398403    6914 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0624 03:36:28.398459    6914 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0624 03:36:28.446777    6914 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0624 03:36:28.451001    6914 out.go:204]   - Generating certificates and keys ...
	I0624 03:36:28.451035    6914 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0624 03:36:28.451068    6914 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0624 03:36:28.451103    6914 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0624 03:36:28.451137    6914 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0624 03:36:28.451171    6914 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0624 03:36:28.451198    6914 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0624 03:36:28.451233    6914 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0624 03:36:28.451263    6914 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0624 03:36:28.451298    6914 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0624 03:36:28.451334    6914 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0624 03:36:28.451351    6914 kubeadm.go:309] [certs] Using the existing "sa" key
	I0624 03:36:28.451418    6914 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0624 03:36:28.731261    6914 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0624 03:36:28.851690    6914 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0624 03:36:28.905038    6914 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0624 03:36:28.991419    6914 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0624 03:36:29.025663    6914 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0624 03:36:29.025722    6914 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0624 03:36:29.025757    6914 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0624 03:36:29.090464    6914 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0624 03:36:29.094686    6914 out.go:204]   - Booting up control plane ...
	I0624 03:36:29.094728    6914 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0624 03:36:29.094791    6914 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0624 03:36:29.094824    6914 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0624 03:36:29.094861    6914 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0624 03:36:29.094941    6914 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0624 03:36:33.594998    6914 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.505130 seconds
	I0624 03:36:33.595052    6914 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0624 03:36:33.600695    6914 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0624 03:36:34.112138    6914 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0624 03:36:34.112332    6914 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-252000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0624 03:36:34.615982    6914 kubeadm.go:309] [bootstrap-token] Using token: ig8u9t.o8ynutmdor6z293i
	I0624 03:36:34.622488    6914 out.go:204]   - Configuring RBAC rules ...
	I0624 03:36:34.622552    6914 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0624 03:36:34.622605    6914 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0624 03:36:34.627054    6914 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0624 03:36:34.628028    6914 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0624 03:36:34.628821    6914 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0624 03:36:34.629714    6914 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0624 03:36:34.637512    6914 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0624 03:36:34.812002    6914 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0624 03:36:35.019817    6914 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0624 03:36:35.020344    6914 kubeadm.go:309] 
	I0624 03:36:35.020379    6914 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0624 03:36:35.020383    6914 kubeadm.go:309] 
	I0624 03:36:35.020429    6914 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0624 03:36:35.020436    6914 kubeadm.go:309] 
	I0624 03:36:35.020451    6914 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0624 03:36:35.020480    6914 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0624 03:36:35.020506    6914 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0624 03:36:35.020510    6914 kubeadm.go:309] 
	I0624 03:36:35.020540    6914 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0624 03:36:35.020544    6914 kubeadm.go:309] 
	I0624 03:36:35.020570    6914 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0624 03:36:35.020574    6914 kubeadm.go:309] 
	I0624 03:36:35.020601    6914 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0624 03:36:35.020636    6914 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0624 03:36:35.020670    6914 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0624 03:36:35.020674    6914 kubeadm.go:309] 
	I0624 03:36:35.020714    6914 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0624 03:36:35.020755    6914 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0624 03:36:35.020758    6914 kubeadm.go:309] 
	I0624 03:36:35.020801    6914 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ig8u9t.o8ynutmdor6z293i \
	I0624 03:36:35.020865    6914 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:589c6e059dc550e81a31ec2e6905c9b09b436fd51ac2d4f41e53b0889a4a68c0 \
	I0624 03:36:35.020875    6914 kubeadm.go:309] 	--control-plane 
	I0624 03:36:35.020879    6914 kubeadm.go:309] 
	I0624 03:36:35.020924    6914 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0624 03:36:35.020930    6914 kubeadm.go:309] 
	I0624 03:36:35.020976    6914 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ig8u9t.o8ynutmdor6z293i \
	I0624 03:36:35.021030    6914 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:589c6e059dc550e81a31ec2e6905c9b09b436fd51ac2d4f41e53b0889a4a68c0 
	I0624 03:36:35.021262    6914 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0624 03:36:35.021275    6914 cni.go:84] Creating CNI manager for ""
	I0624 03:36:35.021285    6914 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:36:35.028185    6914 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0624 03:36:35.031376    6914 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0624 03:36:35.034169    6914 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
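(Editor's note: the 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not shown in the log. A representative bridge conflist of roughly that shape is sketched below; the field values are illustrative, not the exact bytes minikube wrote.)

{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}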
	I0624 03:36:35.040766    6914 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0624 03:36:35.040817    6914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-252000 minikube.k8s.io/updated_at=2024_06_24T03_36_35_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec minikube.k8s.io/name=stopped-upgrade-252000 minikube.k8s.io/primary=true
	I0624 03:36:35.040818    6914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 03:36:35.075449    6914 kubeadm.go:1107] duration metric: took 34.674666ms to wait for elevateKubeSystemPrivileges
	I0624 03:36:35.080302    6914 ops.go:34] apiserver oom_adj: -16
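(Editor's note: the -16 read back from /proc/$(pgrep kube-apiserver)/oom_adj means the kernel's OOM killer strongly disfavors killing the apiserver; on the legacy oom_adj scale of -17 to +15, -17 disables OOM killing for the process entirely.)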
	W0624 03:36:35.080323    6914 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0624 03:36:35.080329    6914 kubeadm.go:393] duration metric: took 4m11.845379041s to StartCluster
	I0624 03:36:35.080339    6914 settings.go:142] acquiring lock: {Name:mk350ce6fa96c4a87ff2b5575a8be101ddfe67cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:36:35.080508    6914 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:36:35.080884    6914 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/kubeconfig: {Name:mkbbeb070f5681b596c7409cd66efdb520d422d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:36:35.081101    6914 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:36:35.081120    6914 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0624 03:36:35.081160    6914 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-252000"
	I0624 03:36:35.081172    6914 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-252000"
	I0624 03:36:35.081172    6914 config.go:182] Loaded profile config "stopped-upgrade-252000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0624 03:36:35.081173    6914 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-252000"
	W0624 03:36:35.081191    6914 addons.go:243] addon storage-provisioner should already be in state true
	I0624 03:36:35.081186    6914 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-252000"
	I0624 03:36:35.081203    6914 host.go:66] Checking if "stopped-upgrade-252000" exists ...
	I0624 03:36:35.085098    6914 out.go:177] * Verifying Kubernetes components...
	I0624 03:36:35.085840    6914 kapi.go:59] client config for stopped-upgrade-252000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/stopped-upgrade-252000/client.key", CAFile:"/Users/jenkins/minikube-integration/19124-4612/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10210ed80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0624 03:36:35.089667    6914 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-252000"
	W0624 03:36:35.089672    6914 addons.go:243] addon default-storageclass should already be in state true
	I0624 03:36:35.089678    6914 host.go:66] Checking if "stopped-upgrade-252000" exists ...
	I0624 03:36:35.090284    6914 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0624 03:36:35.090289    6914 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0624 03:36:35.090297    6914 sshutil.go:53] new ssh client: &{IP:localhost Port:51107 SSHKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/stopped-upgrade-252000/id_rsa Username:docker}
	I0624 03:36:35.093243    6914 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0624 03:36:35.096251    6914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:36:35.099176    6914 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0624 03:36:35.099182    6914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0624 03:36:35.099187    6914 sshutil.go:53] new ssh client: &{IP:localhost Port:51107 SSHKeyPath:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/stopped-upgrade-252000/id_rsa Username:docker}
	I0624 03:36:35.166205    6914 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0624 03:36:35.171721    6914 api_server.go:52] waiting for apiserver process to appear ...
	I0624 03:36:35.171761    6914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 03:36:35.175450    6914 api_server.go:72] duration metric: took 94.338959ms to wait for apiserver process to appear ...
	I0624 03:36:35.175458    6914 api_server.go:88] waiting for apiserver healthz status ...
	I0624 03:36:35.175465    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:36:35.215539    6914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0624 03:36:35.242456    6914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0624 03:36:40.177615    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:36:40.177694    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:36:45.178499    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:36:45.178520    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:36:50.178945    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:36:50.178967    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:36:55.179531    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:36:55.179571    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:00.180318    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:00.180354    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:05.181285    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:05.181324    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0624 03:37:05.568163    6914 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0624 03:37:05.574505    6914 out.go:177] * Enabled addons: storage-provisioner
	I0624 03:37:05.584375    6914 addons.go:510] duration metric: took 30.503520709s for enable addons: enabled=[storage-provisioner]
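(Editor's note: only default-storageclass failed outright. The storage-provisioner manifest is applied with kubectl over SSH inside the guest, using the node-local kubeconfig shown in the apply commands above, whereas making "standard" the default storage class requires the host-side client to reach the API at 10.0.2.15:8443 — the same endpoint whose healthz checks time out throughout this run, which is why only the latter reports an error here.)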
	I0624 03:37:10.182554    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:10.182585    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:15.184138    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:15.184179    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:20.186135    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:20.186171    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:25.188348    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:25.188386    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:30.190594    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:30.190615    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:35.190903    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:35.191065    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:37:35.201372    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:37:35.201446    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:37:35.211971    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:37:35.212039    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:37:35.222233    6914 logs.go:276] 2 containers: [802c943d5cef db7f249020db]
	I0624 03:37:35.222297    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:37:35.232204    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:37:35.232273    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:37:35.242491    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:37:35.242559    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:37:35.252693    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:37:35.252761    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:37:35.263157    6914 logs.go:276] 0 containers: []
	W0624 03:37:35.263168    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:37:35.263224    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:37:35.273266    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:37:35.273288    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:37:35.273294    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:37:35.277723    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:37:35.277733    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:37:35.311883    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:37:35.311899    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:37:35.326105    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:37:35.326117    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:37:35.337752    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:37:35.337764    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:37:35.349491    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:37:35.349502    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:37:35.366382    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:37:35.366393    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:37:35.378849    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:37:35.378859    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:37:35.402992    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:37:35.402999    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:37:35.413887    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:37:35.413899    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:37:35.448749    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:37:35.448760    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:37:35.462584    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:37:35.462593    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:37:35.473738    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:37:35.473749    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:37:37.990930    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:42.993201    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:42.993307    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:37:43.004894    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:37:43.004967    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:37:43.015293    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:37:43.015363    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:37:43.025833    6914 logs.go:276] 2 containers: [802c943d5cef db7f249020db]
	I0624 03:37:43.025894    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:37:43.036152    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:37:43.036212    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:37:43.046474    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:37:43.046532    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:37:43.057347    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:37:43.057417    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:37:43.067875    6914 logs.go:276] 0 containers: []
	W0624 03:37:43.067886    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:37:43.067938    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:37:43.078499    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:37:43.078515    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:37:43.078520    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:37:43.089693    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:37:43.089703    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:37:43.101335    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:37:43.101345    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:37:43.112577    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:37:43.112591    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:37:43.136909    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:37:43.136916    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:37:43.172293    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:37:43.172301    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:37:43.187370    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:37:43.187381    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:37:43.201500    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:37:43.201511    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:37:43.213052    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:37:43.213063    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:37:43.224575    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:37:43.224587    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:37:43.229205    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:37:43.229213    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:37:43.265218    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:37:43.265229    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:37:43.284138    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:37:43.284150    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
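The cycle above repeats for the rest of this test: each probe of https://10.0.2.15:8443/healthz blocks for the full 5-second client timeout (compare the timestamps on the paired api_server.go:253 and api_server.go:269 lines), and minikube then falls back to gathering component logs before retrying. As a minimal Go sketch of that probe — not minikube's actual api_server.go, and assuming a hypothetical standalone main plus InsecureSkipVerify in place of the real CA handling — the "Client.Timeout exceeded while awaiting headers" failure seen here can be reproduced like this:

// Hedged sketch, not minikube source: an HTTPS GET against the apiserver
// healthz endpoint with a ~5s client timeout, matching the gap between the
// "Checking apiserver healthz" and "stopped:" lines in this log.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // assumption: mirrors the ~5s probe window in the log
		Transport: &http.Transport{
			// assumption: skip verification; the real client trusts the cluster CA
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.0.2.15:8443/healthz")
	if err != nil {
		// with an unresponsive apiserver this prints:
		// context deadline exceeded (Client.Timeout exceeded while awaiting headers)
		fmt.Println("stopped:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz status:", resp.Status)
}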
	I0624 03:37:45.804229    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:50.806856    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:50.807050    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:37:50.826805    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:37:50.826900    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:37:50.843598    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:37:50.843680    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:37:50.855774    6914 logs.go:276] 2 containers: [802c943d5cef db7f249020db]
	I0624 03:37:50.855840    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:37:50.866442    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:37:50.866505    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:37:50.877214    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:37:50.877293    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:37:50.891479    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:37:50.891550    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:37:50.903711    6914 logs.go:276] 0 containers: []
	W0624 03:37:50.903722    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:37:50.903777    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:37:50.914066    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:37:50.914084    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:37:50.914090    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:37:50.918439    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:37:50.918446    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:37:50.952713    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:37:50.952723    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:37:50.967221    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:37:50.967232    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:37:50.982582    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:37:50.982593    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:37:50.994582    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:37:50.994594    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:37:51.011344    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:37:51.011354    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:37:51.035921    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:37:51.035928    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:37:51.070869    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:37:51.070877    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:37:51.084709    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:37:51.084719    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:37:51.097536    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:37:51.097547    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:37:51.109793    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:37:51.109805    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:37:51.126337    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:37:51.126348    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
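Each gathering pass in this log runs the same two commands per component: docker ps -a --filter=name=k8s_<component> --format={{.ID}} to enumerate container IDs, then docker logs --tail 400 <id> for each hit. A hedged standalone sketch of that loop — not the real logs.go, with the component list copied from the filters above — might look like:

// Hedged sketch, not minikube source: enumerate k8s_* containers per
// component and tail each one's recent output, as the log above does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Println("docker ps failed:", err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		if len(ids) == 0 {
			// e.g. kindnet in this run, which never has a matching container
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("--- logs for %s [%s] ---\n%s", c, id, logs)
		}
	}
}

Components with no matching container, like kindnet throughout this run, only produce the "No container was found" warning and are skipped; everything else gets a 400-line tail per container ID.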
	I0624 03:37:53.640338    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:37:58.642192    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:37:58.642327    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:37:58.655851    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:37:58.655920    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:37:58.666579    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:37:58.666639    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:37:58.676859    6914 logs.go:276] 2 containers: [802c943d5cef db7f249020db]
	I0624 03:37:58.676933    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:37:58.687639    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:37:58.687705    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:37:58.698054    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:37:58.698126    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:37:58.708544    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:37:58.708610    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:37:58.719517    6914 logs.go:276] 0 containers: []
	W0624 03:37:58.719528    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:37:58.719583    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:37:58.730323    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:37:58.730337    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:37:58.730342    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:37:58.742044    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:37:58.742054    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:37:58.753538    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:37:58.753548    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:37:58.770588    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:37:58.770599    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:37:58.784112    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:37:58.784122    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:37:58.818958    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:37:58.818965    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:37:58.823031    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:37:58.823037    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:37:58.836650    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:37:58.836661    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:37:58.858978    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:37:58.858989    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:37:58.875495    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:37:58.875506    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:37:58.899926    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:37:58.899940    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:37:58.912514    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:37:58.912525    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:37:58.949508    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:37:58.949521    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:38:01.466115    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:38:06.468293    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:38:06.468405    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:38:06.482962    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:38:06.483040    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:38:06.494896    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:38:06.494962    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:38:06.505842    6914 logs.go:276] 2 containers: [802c943d5cef db7f249020db]
	I0624 03:38:06.505905    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:38:06.516916    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:38:06.516982    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:38:06.527584    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:38:06.527651    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:38:06.537921    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:38:06.537983    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:38:06.548701    6914 logs.go:276] 0 containers: []
	W0624 03:38:06.548712    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:38:06.548759    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:38:06.559415    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:38:06.559429    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:38:06.559434    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:38:06.571503    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:38:06.571518    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:38:06.585753    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:38:06.585763    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:38:06.589897    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:38:06.589903    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:38:06.624912    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:38:06.624921    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:38:06.639663    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:38:06.639674    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:38:06.653532    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:38:06.653545    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:38:06.664727    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:38:06.664740    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:38:06.678968    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:38:06.678980    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:38:06.691607    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:38:06.691617    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:38:06.725052    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:38:06.725066    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:38:06.740922    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:38:06.740939    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:38:06.765936    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:38:06.765951    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:38:09.284760    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:38:14.287054    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:38:14.287221    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:38:14.301200    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:38:14.301270    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:38:14.314727    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:38:14.314796    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:38:14.325309    6914 logs.go:276] 2 containers: [802c943d5cef db7f249020db]
	I0624 03:38:14.325375    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:38:14.336072    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:38:14.336137    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:38:14.346893    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:38:14.346964    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:38:14.356959    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:38:14.357020    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:38:14.366966    6914 logs.go:276] 0 containers: []
	W0624 03:38:14.366980    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:38:14.367036    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:38:14.376960    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:38:14.376974    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:38:14.376979    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:38:14.411582    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:38:14.411590    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:38:14.425562    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:38:14.425572    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:38:14.437895    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:38:14.437907    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:38:14.449451    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:38:14.449462    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:38:14.460727    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:38:14.460738    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:38:14.484433    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:38:14.484441    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:38:14.489165    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:38:14.489172    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:38:14.522794    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:38:14.522809    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:38:14.537233    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:38:14.537243    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:38:14.548687    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:38:14.548697    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:38:14.567493    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:38:14.567504    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:38:14.584606    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:38:14.584617    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:38:17.098202    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:38:22.100462    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:38:22.100683    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:38:22.123876    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:38:22.123986    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:38:22.140778    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:38:22.140851    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:38:22.153615    6914 logs.go:276] 2 containers: [802c943d5cef db7f249020db]
	I0624 03:38:22.153690    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:38:22.165023    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:38:22.165086    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:38:22.175422    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:38:22.175488    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:38:22.186023    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:38:22.186090    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:38:22.199532    6914 logs.go:276] 0 containers: []
	W0624 03:38:22.199545    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:38:22.199613    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:38:22.211191    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:38:22.211204    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:38:22.211210    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:38:22.223301    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:38:22.223313    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:38:22.257237    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:38:22.257249    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:38:22.261659    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:38:22.261665    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:38:22.277297    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:38:22.277308    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:38:22.289757    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:38:22.289766    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:38:22.304246    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:38:22.304256    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:38:22.324168    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:38:22.324180    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:38:22.335757    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:38:22.335766    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:38:22.373417    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:38:22.373428    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:38:22.392510    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:38:22.392519    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:38:22.404115    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:38:22.404125    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:38:22.415998    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:38:22.416008    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:38:24.943402    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:38:29.945695    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:38:29.946225    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:38:29.978398    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:38:29.978533    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:38:29.996551    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:38:29.996654    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:38:30.013628    6914 logs.go:276] 2 containers: [802c943d5cef db7f249020db]
	I0624 03:38:30.013695    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:38:30.029559    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:38:30.029636    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:38:30.040031    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:38:30.040092    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:38:30.054621    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:38:30.054686    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:38:30.064683    6914 logs.go:276] 0 containers: []
	W0624 03:38:30.064698    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:38:30.064759    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:38:30.075919    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:38:30.075938    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:38:30.075943    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:38:30.088411    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:38:30.088425    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:38:30.105821    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:38:30.105831    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:38:30.130762    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:38:30.130771    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:38:30.135067    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:38:30.135074    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:38:30.154480    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:38:30.154493    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:38:30.168378    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:38:30.168392    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:38:30.186043    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:38:30.186056    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:38:30.197730    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:38:30.197745    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:38:30.209148    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:38:30.209163    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:38:30.242631    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:38:30.242638    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:38:30.285395    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:38:30.285406    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:38:30.297567    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:38:30.297581    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:38:32.810940    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:38:37.812425    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:38:37.812634    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:38:37.832983    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:38:37.833075    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:38:37.846959    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:38:37.847021    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:38:37.859444    6914 logs.go:276] 2 containers: [802c943d5cef db7f249020db]
	I0624 03:38:37.859512    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:38:37.870437    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:38:37.870501    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:38:37.882213    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:38:37.882278    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:38:37.893658    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:38:37.893721    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:38:37.903751    6914 logs.go:276] 0 containers: []
	W0624 03:38:37.903762    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:38:37.903819    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:38:37.913987    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:38:37.914002    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:38:37.914007    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:38:37.931280    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:38:37.931289    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:38:37.943017    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:38:37.943027    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:38:37.976942    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:38:37.976950    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:38:37.981340    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:38:37.981347    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:38:38.015131    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:38:38.015141    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:38:38.029942    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:38:38.029953    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:38:38.044245    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:38:38.044254    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:38:38.056001    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:38:38.056012    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:38:38.081254    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:38:38.081262    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:38:38.093184    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:38:38.093196    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:38:38.108791    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:38:38.108807    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:38:38.120690    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:38:38.120700    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:38:40.634988    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:38:45.637173    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:38:45.637301    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:38:45.648872    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:38:45.648944    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:38:45.659520    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:38:45.659593    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:38:45.670423    6914 logs.go:276] 2 containers: [802c943d5cef db7f249020db]
	I0624 03:38:45.670486    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:38:45.680976    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:38:45.681033    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:38:45.691126    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:38:45.691183    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:38:45.701227    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:38:45.701294    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:38:45.711012    6914 logs.go:276] 0 containers: []
	W0624 03:38:45.711023    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:38:45.711077    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:38:45.721418    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:38:45.721434    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:38:45.721439    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:38:45.735179    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:38:45.735189    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:38:45.747137    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:38:45.747148    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:38:45.751658    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:38:45.751665    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:38:45.785794    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:38:45.785805    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:38:45.804443    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:38:45.804453    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:38:45.818858    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:38:45.818867    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:38:45.830866    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:38:45.830876    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:38:45.848864    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:38:45.848878    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:38:45.882298    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:38:45.882308    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:38:45.896309    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:38:45.896320    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:38:45.912208    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:38:45.912218    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:38:45.923775    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:38:45.923787    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:38:48.450814    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:38:53.452962    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:38:53.453141    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:38:53.476222    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:38:53.476334    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:38:53.491983    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:38:53.492056    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:38:53.505051    6914 logs.go:276] 4 containers: [a398a7123448 0fa211015fd9 802c943d5cef db7f249020db]
	I0624 03:38:53.505127    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:38:53.515580    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:38:53.515644    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:38:53.527521    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:38:53.527582    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:38:53.537661    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:38:53.537730    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:38:53.547973    6914 logs.go:276] 0 containers: []
	W0624 03:38:53.547983    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:38:53.548031    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:38:53.558139    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:38:53.558155    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:38:53.558161    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:38:53.574436    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:38:53.574447    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:38:53.598999    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:38:53.599010    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:38:53.603447    6914 logs.go:123] Gathering logs for coredns [a398a7123448] ...
	I0624 03:38:53.603455    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a398a7123448"
	I0624 03:38:53.615136    6914 logs.go:123] Gathering logs for coredns [0fa211015fd9] ...
	I0624 03:38:53.615148    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fa211015fd9"
	I0624 03:38:53.631435    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:38:53.631446    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:38:53.646792    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:38:53.646802    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:38:53.681509    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:38:53.681517    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:38:53.693033    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:38:53.693043    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:38:53.729356    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:38:53.729368    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:38:53.743678    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:38:53.743688    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:38:53.755122    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:38:53.755132    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:38:53.772099    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:38:53.772108    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:38:53.783346    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:38:53.783356    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:38:53.794998    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:38:53.795009    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:38:56.310880    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:39:01.313180    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:39:01.313294    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:39:01.326622    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:39:01.326690    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:39:01.337439    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:39:01.337505    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:39:01.351214    6914 logs.go:276] 4 containers: [a398a7123448 0fa211015fd9 802c943d5cef db7f249020db]
	I0624 03:39:01.351284    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:39:01.363248    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:39:01.363320    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:39:01.375760    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:39:01.375821    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:39:01.386527    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:39:01.386593    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:39:01.396886    6914 logs.go:276] 0 containers: []
	W0624 03:39:01.396902    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:39:01.396952    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:39:01.414704    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:39:01.414720    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:39:01.414725    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:39:01.419407    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:39:01.419414    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:39:01.431679    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:39:01.431689    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:39:01.443299    6914 logs.go:123] Gathering logs for coredns [a398a7123448] ...
	I0624 03:39:01.443308    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a398a7123448"
	I0624 03:39:01.454800    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:39:01.454810    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:39:01.490673    6914 logs.go:123] Gathering logs for coredns [0fa211015fd9] ...
	I0624 03:39:01.490689    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fa211015fd9"
	I0624 03:39:01.502041    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:39:01.502051    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:39:01.513597    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:39:01.513607    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:39:01.527768    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:39:01.527778    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:39:01.539517    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:39:01.539527    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:39:01.563483    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:39:01.563497    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:39:01.598006    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:39:01.598016    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:39:01.613265    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:39:01.613274    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:39:01.640113    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:39:01.640123    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:39:01.651921    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:39:01.651931    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:39:04.168172    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:39:09.170430    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:39:09.170556    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:39:09.182424    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:39:09.182500    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:39:09.193168    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:39:09.193277    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:39:09.203696    6914 logs.go:276] 4 containers: [a398a7123448 0fa211015fd9 802c943d5cef db7f249020db]
	I0624 03:39:09.203753    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:39:09.213648    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:39:09.213712    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:39:09.224279    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:39:09.224333    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:39:09.234640    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:39:09.234703    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:39:09.244440    6914 logs.go:276] 0 containers: []
	W0624 03:39:09.244456    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:39:09.244507    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:39:09.255166    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:39:09.255180    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:39:09.255185    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:39:09.288957    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:39:09.288970    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:39:09.312073    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:39:09.312086    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:39:09.323200    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:39:09.323213    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:39:09.337638    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:39:09.337650    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:39:09.348407    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:39:09.348416    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:39:09.382465    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:39:09.382473    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:39:09.396777    6914 logs.go:123] Gathering logs for coredns [a398a7123448] ...
	I0624 03:39:09.396785    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a398a7123448"
	I0624 03:39:09.407892    6914 logs.go:123] Gathering logs for coredns [0fa211015fd9] ...
	I0624 03:39:09.407900    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fa211015fd9"
	I0624 03:39:09.419682    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:39:09.419690    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:39:09.431693    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:39:09.431707    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:39:09.443813    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:39:09.443822    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:39:09.469617    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:39:09.469626    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:39:09.473791    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:39:09.473796    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:39:09.485545    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:39:09.485555    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:39:12.011975    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:39:17.014536    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:39:17.014663    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:39:17.026539    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:39:17.026617    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:39:17.037113    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:39:17.037176    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:39:17.048560    6914 logs.go:276] 4 containers: [a398a7123448 0fa211015fd9 802c943d5cef db7f249020db]
	I0624 03:39:17.048640    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:39:17.059301    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:39:17.059372    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:39:17.070519    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:39:17.070584    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:39:17.081737    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:39:17.081804    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:39:17.092109    6914 logs.go:276] 0 containers: []
	W0624 03:39:17.092122    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:39:17.092180    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:39:17.102642    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:39:17.102659    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:39:17.102665    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:39:17.107032    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:39:17.107039    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:39:17.118294    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:39:17.118304    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:39:17.132045    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:39:17.132055    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:39:17.143649    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:39:17.143663    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:39:17.155191    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:39:17.155202    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:39:17.188324    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:39:17.188331    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:39:17.202741    6914 logs.go:123] Gathering logs for coredns [a398a7123448] ...
	I0624 03:39:17.202755    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a398a7123448"
	I0624 03:39:17.213848    6914 logs.go:123] Gathering logs for coredns [0fa211015fd9] ...
	I0624 03:39:17.213858    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fa211015fd9"
	I0624 03:39:17.225557    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:39:17.225570    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:39:17.250353    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:39:17.250362    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:39:17.262794    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:39:17.262805    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:39:17.299264    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:39:17.299279    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:39:17.319915    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:39:17.319928    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:39:17.337664    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:39:17.337678    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:39:19.851343    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:39:24.853883    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:39:24.854067    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:39:24.869263    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:39:24.869342    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:39:24.882112    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:39:24.882189    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:39:24.895065    6914 logs.go:276] 4 containers: [a398a7123448 0fa211015fd9 802c943d5cef db7f249020db]
	I0624 03:39:24.895135    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:39:24.906688    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:39:24.906753    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:39:24.917220    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:39:24.917293    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:39:24.927955    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:39:24.928018    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:39:24.941407    6914 logs.go:276] 0 containers: []
	W0624 03:39:24.941420    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:39:24.941474    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:39:24.951681    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:39:24.951697    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:39:24.951704    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:39:24.965831    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:39:24.965846    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:39:24.992000    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:39:24.992008    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:39:25.009115    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:39:25.009128    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:39:25.021070    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:39:25.021080    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:39:25.032557    6914 logs.go:123] Gathering logs for coredns [0fa211015fd9] ...
	I0624 03:39:25.032567    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fa211015fd9"
	I0624 03:39:25.045086    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:39:25.045099    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:39:25.058110    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:39:25.058120    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:39:25.076360    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:39:25.076371    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:39:25.088864    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:39:25.088873    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:39:25.122914    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:39:25.122928    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:39:25.128473    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:39:25.128483    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:39:25.163190    6914 logs.go:123] Gathering logs for coredns [a398a7123448] ...
	I0624 03:39:25.163207    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a398a7123448"
	I0624 03:39:25.174645    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:39:25.174656    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:39:25.189305    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:39:25.189319    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:39:27.703960    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:39:32.705516    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:39:32.705618    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:39:32.716572    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:39:32.716630    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:39:32.726953    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:39:32.727025    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:39:32.738046    6914 logs.go:276] 4 containers: [a398a7123448 0fa211015fd9 802c943d5cef db7f249020db]
	I0624 03:39:32.738113    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:39:32.749147    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:39:32.749211    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:39:32.759314    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:39:32.759372    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:39:32.769476    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:39:32.769543    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:39:32.779874    6914 logs.go:276] 0 containers: []
	W0624 03:39:32.779886    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:39:32.779939    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:39:32.790253    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:39:32.790268    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:39:32.790273    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:39:32.825036    6914 logs.go:123] Gathering logs for coredns [a398a7123448] ...
	I0624 03:39:32.825044    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a398a7123448"
	I0624 03:39:32.836694    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:39:32.836706    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:39:32.856847    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:39:32.856856    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:39:32.868305    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:39:32.868317    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:39:32.872582    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:39:32.872590    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:39:32.907674    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:39:32.907684    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:39:32.923169    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:39:32.923178    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:39:32.934951    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:39:32.934962    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:39:32.956150    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:39:32.956158    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:39:32.969854    6914 logs.go:123] Gathering logs for coredns [0fa211015fd9] ...
	I0624 03:39:32.969862    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fa211015fd9"
	I0624 03:39:32.981218    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:39:32.981227    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:39:32.993243    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:39:32.993254    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:39:33.007926    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:39:33.007936    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:39:33.019419    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:39:33.019429    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:39:35.545150    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:39:40.547694    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:39:40.548019    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:39:40.584604    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:39:40.584733    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:39:40.610951    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:39:40.611027    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:39:40.623505    6914 logs.go:276] 4 containers: [a398a7123448 0fa211015fd9 802c943d5cef db7f249020db]
	I0624 03:39:40.623599    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:39:40.637839    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:39:40.637905    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:39:40.648815    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:39:40.648879    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:39:40.660118    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:39:40.660190    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:39:40.673709    6914 logs.go:276] 0 containers: []
	W0624 03:39:40.673720    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:39:40.673776    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:39:40.689165    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:39:40.689182    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:39:40.689187    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:39:40.704151    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:39:40.704161    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:39:40.717747    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:39:40.717759    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:39:40.730351    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:39:40.730363    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:39:40.765435    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:39:40.765445    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:39:40.779815    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:39:40.779825    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:39:40.791184    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:39:40.791193    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:39:40.803517    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:39:40.803531    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:39:40.818009    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:39:40.818018    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:39:40.843294    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:39:40.843302    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:39:40.861145    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:39:40.861155    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:39:40.873729    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:39:40.873740    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:39:40.909657    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:39:40.909672    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:39:40.914516    6914 logs.go:123] Gathering logs for coredns [a398a7123448] ...
	I0624 03:39:40.914523    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a398a7123448"
	I0624 03:39:40.926618    6914 logs.go:123] Gathering logs for coredns [0fa211015fd9] ...
	I0624 03:39:40.926629    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fa211015fd9"
	I0624 03:39:43.440853    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:39:48.443062    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:39:48.443166    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:39:48.454169    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:39:48.454248    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:39:48.464823    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:39:48.464890    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:39:48.475896    6914 logs.go:276] 4 containers: [a398a7123448 0fa211015fd9 802c943d5cef db7f249020db]
	I0624 03:39:48.475968    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:39:48.492685    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:39:48.492756    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:39:48.503192    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:39:48.503262    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:39:48.514041    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:39:48.514109    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:39:48.524483    6914 logs.go:276] 0 containers: []
	W0624 03:39:48.524494    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:39:48.524549    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:39:48.535327    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:39:48.535344    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:39:48.535349    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:39:48.548104    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:39:48.548114    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:39:48.552339    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:39:48.552345    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:39:48.566678    6914 logs.go:123] Gathering logs for coredns [0fa211015fd9] ...
	I0624 03:39:48.566688    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fa211015fd9"
	I0624 03:39:48.578694    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:39:48.578704    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:39:48.590476    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:39:48.590489    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:39:48.602357    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:39:48.602368    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:39:48.614348    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:39:48.614360    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:39:48.649038    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:39:48.649059    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:39:48.725689    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:39:48.725705    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:39:48.740499    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:39:48.740512    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:39:48.756144    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:39:48.756158    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:39:48.773212    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:39:48.773225    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:39:48.798193    6914 logs.go:123] Gathering logs for coredns [a398a7123448] ...
	I0624 03:39:48.798201    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a398a7123448"
	I0624 03:39:48.816672    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:39:48.816687    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:39:51.330693    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:39:56.332956    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:39:56.333179    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:39:56.359168    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:39:56.359284    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:39:56.377286    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:39:56.377361    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:39:56.391241    6914 logs.go:276] 4 containers: [a398a7123448 0fa211015fd9 802c943d5cef db7f249020db]
	I0624 03:39:56.391315    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:39:56.402863    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:39:56.402928    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:39:56.413242    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:39:56.413310    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:39:56.424528    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:39:56.424599    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:39:56.439132    6914 logs.go:276] 0 containers: []
	W0624 03:39:56.439144    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:39:56.439199    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:39:56.449185    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:39:56.449202    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:39:56.449208    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:39:56.460861    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:39:56.460871    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:39:56.486428    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:39:56.486436    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:39:56.521631    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:39:56.521642    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:39:56.536624    6914 logs.go:123] Gathering logs for coredns [a398a7123448] ...
	I0624 03:39:56.536634    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a398a7123448"
	I0624 03:39:56.548097    6914 logs.go:123] Gathering logs for coredns [0fa211015fd9] ...
	I0624 03:39:56.548108    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fa211015fd9"
	I0624 03:39:56.559648    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:39:56.559657    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:39:56.573031    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:39:56.573042    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:39:56.590545    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:39:56.590556    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:39:56.601869    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:39:56.601881    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:39:56.635900    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:39:56.635910    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:39:56.640156    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:39:56.640165    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:39:56.654660    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:39:56.654671    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:39:56.673333    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:39:56.673344    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:39:56.690144    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:39:56.690154    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:39:59.207973    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:40:04.210182    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:40:04.210339    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:40:04.223887    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:40:04.223963    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:40:04.234860    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:40:04.234925    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:40:04.245455    6914 logs.go:276] 4 containers: [a398a7123448 0fa211015fd9 802c943d5cef db7f249020db]
	I0624 03:40:04.245526    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:40:04.255672    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:40:04.255731    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:40:04.266063    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:40:04.266125    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:40:04.276772    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:40:04.276835    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:40:04.287859    6914 logs.go:276] 0 containers: []
	W0624 03:40:04.287871    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:40:04.287928    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:40:04.297962    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:40:04.297980    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:40:04.297985    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:40:04.309545    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:40:04.309554    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:40:04.333894    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:40:04.333902    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:40:04.368174    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:40:04.368184    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:40:04.383247    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:40:04.383257    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:40:04.400398    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:40:04.400408    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:40:04.419242    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:40:04.419254    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:40:04.432538    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:40:04.432551    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:40:04.465654    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:40:04.465662    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:40:04.480224    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:40:04.480233    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:40:04.495498    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:40:04.495512    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:40:04.509766    6914 logs.go:123] Gathering logs for coredns [a398a7123448] ...
	I0624 03:40:04.509779    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a398a7123448"
	I0624 03:40:04.521497    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:40:04.521506    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:40:04.532948    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:40:04.532962    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:40:04.537668    6914 logs.go:123] Gathering logs for coredns [0fa211015fd9] ...
	I0624 03:40:04.537677    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fa211015fd9"
	I0624 03:40:07.050710    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:40:12.053119    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:40:12.053402    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:40:12.103253    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:40:12.103371    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:40:12.120194    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:40:12.120276    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:40:12.133413    6914 logs.go:276] 4 containers: [a398a7123448 0fa211015fd9 802c943d5cef db7f249020db]
	I0624 03:40:12.133493    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:40:12.144958    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:40:12.145025    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:40:12.155820    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:40:12.155885    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:40:12.171613    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:40:12.171686    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:40:12.182143    6914 logs.go:276] 0 containers: []
	W0624 03:40:12.182155    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:40:12.182212    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:40:12.193549    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:40:12.193565    6914 logs.go:123] Gathering logs for coredns [a398a7123448] ...
	I0624 03:40:12.193570    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a398a7123448"
	I0624 03:40:12.205950    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:40:12.205964    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:40:12.217864    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:40:12.217873    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:40:12.240930    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:40:12.240937    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:40:12.255417    6914 logs.go:123] Gathering logs for coredns [0fa211015fd9] ...
	I0624 03:40:12.255431    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fa211015fd9"
	I0624 03:40:12.267486    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:40:12.267498    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:40:12.280136    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:40:12.280169    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:40:12.315687    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:40:12.315694    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:40:12.331980    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:40:12.331995    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:40:12.343818    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:40:12.343834    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:40:12.355994    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:40:12.356003    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:40:12.391595    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:40:12.391606    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:40:12.406496    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:40:12.406506    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:40:12.418321    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:40:12.418332    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:40:12.436389    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:40:12.436402    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:40:14.943431    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:40:19.945713    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:40:19.945920    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:40:19.964107    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:40:19.964199    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:40:19.977220    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:40:19.977291    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:40:19.989084    6914 logs.go:276] 4 containers: [a398a7123448 0fa211015fd9 802c943d5cef db7f249020db]
	I0624 03:40:19.989152    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:40:19.999890    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:40:19.999962    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:40:20.010413    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:40:20.010481    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:40:20.020350    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:40:20.020413    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:40:20.030477    6914 logs.go:276] 0 containers: []
	W0624 03:40:20.030490    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:40:20.030548    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:40:20.041048    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:40:20.041068    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:40:20.041073    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:40:20.053169    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:40:20.053179    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:40:20.070858    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:40:20.070868    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:40:20.094241    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:40:20.094252    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:40:20.127146    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:40:20.127153    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:40:20.132226    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:40:20.132237    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:40:20.143706    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:40:20.143715    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:40:20.155871    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:40:20.155882    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:40:20.170658    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:40:20.170669    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:40:20.206356    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:40:20.206366    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:40:20.225757    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:40:20.225766    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:40:20.239809    6914 logs.go:123] Gathering logs for coredns [a398a7123448] ...
	I0624 03:40:20.239818    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a398a7123448"
	I0624 03:40:20.251216    6914 logs.go:123] Gathering logs for coredns [0fa211015fd9] ...
	I0624 03:40:20.251227    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fa211015fd9"
	I0624 03:40:20.262137    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:40:20.262147    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:40:20.273942    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:40:20.273952    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:40:22.788159    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:40:27.790515    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:40:27.790683    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 03:40:27.803678    6914 logs.go:276] 1 containers: [0813748152d9]
	I0624 03:40:27.803746    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 03:40:27.814660    6914 logs.go:276] 1 containers: [8e7f51e3a34a]
	I0624 03:40:27.814726    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 03:40:27.825443    6914 logs.go:276] 4 containers: [a398a7123448 0fa211015fd9 802c943d5cef db7f249020db]
	I0624 03:40:27.825515    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 03:40:27.839288    6914 logs.go:276] 1 containers: [562965f8c59e]
	I0624 03:40:27.839356    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 03:40:27.849173    6914 logs.go:276] 1 containers: [393dee82c4f9]
	I0624 03:40:27.849253    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 03:40:27.859306    6914 logs.go:276] 1 containers: [098261543e5b]
	I0624 03:40:27.859369    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 03:40:27.869788    6914 logs.go:276] 0 containers: []
	W0624 03:40:27.869802    6914 logs.go:278] No container was found matching "kindnet"
	I0624 03:40:27.869859    6914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0624 03:40:27.880541    6914 logs.go:276] 1 containers: [25239e7f92f1]
	I0624 03:40:27.880574    6914 logs.go:123] Gathering logs for dmesg ...
	I0624 03:40:27.880580    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 03:40:27.886233    6914 logs.go:123] Gathering logs for kube-proxy [393dee82c4f9] ...
	I0624 03:40:27.886241    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 393dee82c4f9"
	I0624 03:40:27.900163    6914 logs.go:123] Gathering logs for storage-provisioner [25239e7f92f1] ...
	I0624 03:40:27.900176    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25239e7f92f1"
	I0624 03:40:27.911912    6914 logs.go:123] Gathering logs for Docker ...
	I0624 03:40:27.911926    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 03:40:27.935076    6914 logs.go:123] Gathering logs for container status ...
	I0624 03:40:27.935084    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 03:40:27.947371    6914 logs.go:123] Gathering logs for kube-apiserver [0813748152d9] ...
	I0624 03:40:27.947384    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0813748152d9"
	I0624 03:40:27.962937    6914 logs.go:123] Gathering logs for coredns [0fa211015fd9] ...
	I0624 03:40:27.962948    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fa211015fd9"
	I0624 03:40:27.975480    6914 logs.go:123] Gathering logs for coredns [802c943d5cef] ...
	I0624 03:40:27.975493    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 802c943d5cef"
	I0624 03:40:27.987276    6914 logs.go:123] Gathering logs for coredns [db7f249020db] ...
	I0624 03:40:27.987289    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f249020db"
	I0624 03:40:27.999200    6914 logs.go:123] Gathering logs for kube-scheduler [562965f8c59e] ...
	I0624 03:40:27.999209    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 562965f8c59e"
	I0624 03:40:28.013853    6914 logs.go:123] Gathering logs for kube-controller-manager [098261543e5b] ...
	I0624 03:40:28.013862    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098261543e5b"
	I0624 03:40:28.031369    6914 logs.go:123] Gathering logs for coredns [a398a7123448] ...
	I0624 03:40:28.031382    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a398a7123448"
	I0624 03:40:28.042173    6914 logs.go:123] Gathering logs for kubelet ...
	I0624 03:40:28.042187    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 03:40:28.077490    6914 logs.go:123] Gathering logs for describe nodes ...
	I0624 03:40:28.077501    6914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 03:40:28.111890    6914 logs.go:123] Gathering logs for etcd [8e7f51e3a34a] ...
	I0624 03:40:28.111905    6914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7f51e3a34a"
	I0624 03:40:30.627932    6914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0624 03:40:35.630331    6914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0624 03:40:35.633640    6914 out.go:177] 
	W0624 03:40:35.637629    6914 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0624 03:40:35.637635    6914 out.go:239] * 
	W0624 03:40:35.638128    6914 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:40:35.649635    6914 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-252000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (575.27s)
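
The stderr stream above is minikube cycling through the same recovery loop for the full six minutes: probe https://10.0.2.15:8443/healthz, time out after about five seconds, re-enumerate the control-plane containers, and re-gather their logs. As a rough standalone illustration only (not minikube's actual implementation; the URL, per-request timeout, and overall deadline are taken from the log above), the probe half of that loop could be written in Go like this:

    // healthzprobe.go: hypothetical re-creation of the healthz polling seen in
    // the log above. A sketch for diagnosis, not code from minikube.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        const url = "https://10.0.2.15:8443/healthz" // endpoint from the log
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5s gap between "Checking" and "stopped" lines
            Transport: &http.Transport{
                // A bare probe has no cluster CA, so skip verification here;
                // minikube itself verifies against the cluster certificate.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(6 * time.Minute) // the "wait 6m0s for node" budget
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Printf("healthz: %s %s\n", resp.Status, body)
                if resp.StatusCode == http.StatusOK {
                    return // apiserver reported healthy
                }
            } else {
                fmt.Printf("healthz probe failed: %v\n", err)
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("apiserver healthz never reported healthy before the deadline")
    }

In this run every attempt would take the error branch, which is exactly the "apiserver healthz never reported healthy: context deadline exceeded" condition that GUEST_START reports.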

TestNoKubernetes/serial/StartNoArgs (5.34s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-996000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-996000 --driver=qemu2 : exit status 80 (5.265929292s)

-- stdout --
	* [NoKubernetes-996000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-996000
	* Restarting existing qemu2 VM for "NoKubernetes-996000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-996000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-996000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-996000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-996000 -n NoKubernetes-996000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-996000 -n NoKubernetes-996000: exit status 7 (69.842417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-996000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.34s)
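
Both restart attempts in this test (and the qemu2 failures that follow) die on the same symptom: the driver cannot connect to the socket_vmnet unix socket. A minimal, hypothetical reproduction of just that connection step (the path comes from the error text; this is not part of the test suite):

    // vmnetcheck.go: sketch that dials socket_vmnet the way the qemu2 driver
    // needs to; hypothetical, for diagnosis only.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        const path = "/var/run/socket_vmnet" // path from the errors above
        conn, err := net.DialTimeout("unix", path, 2*time.Second)
        if err != nil {
            fmt.Printf("cannot connect to %s: %v\n", path, err)
            return
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

"Connection refused" on a unix-socket dial usually means the path exists but nothing is listening on it, i.e. the socket_vmnet daemon on this CI host was likely not running while these tests executed, which would explain the cluster of otherwise-unrelated qemu2 start failures in this report.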

TestPause/serial/Start (9.95s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-123000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-123000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.890120542s)

-- stdout --
	* [pause-123000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-123000" primary control-plane node in "pause-123000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-123000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-123000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-123000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-123000 -n pause-123000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-123000 -n pause-123000: exit status 7 (57.893083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-123000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.95s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.43s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19124
- KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3320976900/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.43s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.38s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19124
- KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2694162567/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.38s)
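
Both TestHyperkitDriverSkipUpgrade cases fail rather than skip: hyperkit is an Intel-only macOS hypervisor, so on this darwin/arm64 agent the binary exits with DRV_UNSUPPORTED_OS (status 56) before any upgrade logic runs. A hedged sketch of the guard such a test could apply; the helper name and skip message are illustrative, not the project's actual code:

    package integration

    import (
        "runtime"
        "testing"
    )

    // skipUnlessHyperkitSupported is a hypothetical guard: hyperkit only exists
    // for Intel macOS, so any other platform should skip instead of failing
    // with DRV_UNSUPPORTED_OS as the runs above do.
    func skipUnlessHyperkitSupported(t *testing.T) {
        t.Helper()
        if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
            t.Skipf("hyperkit driver is unsupported on %s/%s", runtime.GOOS, runtime.GOARCH)
        }
    }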

TestStartStop/group/old-k8s-version/serial/FirstStart (9.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-703000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-703000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.816802375s)

-- stdout --
	* [old-k8s-version-703000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-703000" primary control-plane node in "old-k8s-version-703000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-703000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0624 03:42:02.030516    7409 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:42:02.030709    7409 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:42:02.030715    7409 out.go:304] Setting ErrFile to fd 2...
	I0624 03:42:02.030717    7409 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:42:02.030851    7409 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:42:02.031877    7409 out.go:298] Setting JSON to false
	I0624 03:42:02.048028    7409 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6092,"bootTime":1719219630,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:42:02.048096    7409 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:42:02.053979    7409 out.go:177] * [old-k8s-version-703000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:42:02.062821    7409 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:42:02.062862    7409 notify.go:220] Checking for updates...
	I0624 03:42:02.070798    7409 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:42:02.073830    7409 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:42:02.077757    7409 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:42:02.080818    7409 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:42:02.083764    7409 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:42:02.087144    7409 config.go:182] Loaded profile config "cert-expiration-509000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:42:02.087210    7409 config.go:182] Loaded profile config "multinode-913000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:42:02.087258    7409 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:42:02.090804    7409 out.go:177] * Using the qemu2 driver based on user configuration
	I0624 03:42:02.097791    7409 start.go:297] selected driver: qemu2
	I0624 03:42:02.097799    7409 start.go:901] validating driver "qemu2" against <nil>
	I0624 03:42:02.097806    7409 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:42:02.100287    7409 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0624 03:42:02.103820    7409 out.go:177] * Automatically selected the socket_vmnet network
	I0624 03:42:02.107667    7409 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 03:42:02.107688    7409 cni.go:84] Creating CNI manager for ""
	I0624 03:42:02.107693    7409 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0624 03:42:02.107722    7409 start.go:340] cluster config:
	{Name:old-k8s-version-703000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:42:02.112167    7409 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:42:02.120814    7409 out.go:177] * Starting "old-k8s-version-703000" primary control-plane node in "old-k8s-version-703000" cluster
	I0624 03:42:02.124811    7409 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0624 03:42:02.124828    7409 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0624 03:42:02.124835    7409 cache.go:56] Caching tarball of preloaded images
	I0624 03:42:02.124904    7409 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0624 03:42:02.124910    7409 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0624 03:42:02.125007    7409 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/old-k8s-version-703000/config.json ...
	I0624 03:42:02.125019    7409 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/old-k8s-version-703000/config.json: {Name:mke6e8f0b5a6e2053c24d49bce6bcd103cdd1cd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:42:02.125239    7409 start.go:360] acquireMachinesLock for old-k8s-version-703000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:42:02.125278    7409 start.go:364] duration metric: took 30.375µs to acquireMachinesLock for "old-k8s-version-703000"
	I0624 03:42:02.125290    7409 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-703000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:42:02.125320    7409 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:42:02.129801    7409 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0624 03:42:02.147778    7409 start.go:159] libmachine.API.Create for "old-k8s-version-703000" (driver="qemu2")
	I0624 03:42:02.147803    7409 client.go:168] LocalClient.Create starting
	I0624 03:42:02.147869    7409 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:42:02.147898    7409 main.go:141] libmachine: Decoding PEM data...
	I0624 03:42:02.147909    7409 main.go:141] libmachine: Parsing certificate...
	I0624 03:42:02.147944    7409 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:42:02.147967    7409 main.go:141] libmachine: Decoding PEM data...
	I0624 03:42:02.147975    7409 main.go:141] libmachine: Parsing certificate...
	I0624 03:42:02.148415    7409 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:42:02.291638    7409 main.go:141] libmachine: Creating SSH key...
	I0624 03:42:02.363370    7409 main.go:141] libmachine: Creating Disk image...
	I0624 03:42:02.363376    7409 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:42:02.363591    7409 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/old-k8s-version-703000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/old-k8s-version-703000/disk.qcow2
	I0624 03:42:02.375845    7409 main.go:141] libmachine: STDOUT: 
	I0624 03:42:02.375866    7409 main.go:141] libmachine: STDERR: 
	I0624 03:42:02.375928    7409 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/old-k8s-version-703000/disk.qcow2 +20000M
	I0624 03:42:02.386938    7409 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:42:02.386953    7409 main.go:141] libmachine: STDERR: 
	I0624 03:42:02.386967    7409 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/old-k8s-version-703000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/old-k8s-version-703000/disk.qcow2
	I0624 03:42:02.386972    7409 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:42:02.387000    7409 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/old-k8s-version-703000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/old-k8s-version-703000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/old-k8s-version-703000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:6e:a9:be:b3:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/old-k8s-version-703000/disk.qcow2
	I0624 03:42:02.388673    7409 main.go:141] libmachine: STDOUT: 
	I0624 03:42:02.388691    7409 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:42:02.388708    7409 client.go:171] duration metric: took 240.9005ms to LocalClient.Create
	I0624 03:42:04.391081    7409 start.go:128] duration metric: took 2.265722s to createHost
	I0624 03:42:04.391181    7409 start.go:83] releasing machines lock for "old-k8s-version-703000", held for 2.265912708s
	W0624 03:42:04.391237    7409 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:42:04.408392    7409 out.go:177] * Deleting "old-k8s-version-703000" in qemu2 ...
	W0624 03:42:04.438602    7409 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:42:04.438634    7409 start.go:728] Will try again in 5 seconds ...
	I0624 03:42:09.439375    7409 start.go:360] acquireMachinesLock for old-k8s-version-703000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:42:09.439812    7409 start.go:364] duration metric: took 350.292µs to acquireMachinesLock for "old-k8s-version-703000"
	I0624 03:42:09.439942    7409 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-703000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:42:09.440202    7409 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:42:09.456232    7409 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0624 03:42:09.500481    7409 start.go:159] libmachine.API.Create for "old-k8s-version-703000" (driver="qemu2")
	I0624 03:42:09.500543    7409 client.go:168] LocalClient.Create starting
	I0624 03:42:09.500702    7409 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:42:09.500773    7409 main.go:141] libmachine: Decoding PEM data...
	I0624 03:42:09.500791    7409 main.go:141] libmachine: Parsing certificate...
	I0624 03:42:09.500934    7409 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:42:09.500985    7409 main.go:141] libmachine: Decoding PEM data...
	I0624 03:42:09.500999    7409 main.go:141] libmachine: Parsing certificate...
	I0624 03:42:09.501844    7409 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:42:09.659285    7409 main.go:141] libmachine: Creating SSH key...
	I0624 03:42:09.741243    7409 main.go:141] libmachine: Creating Disk image...
	I0624 03:42:09.741248    7409 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:42:09.741455    7409 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/old-k8s-version-703000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/old-k8s-version-703000/disk.qcow2
	I0624 03:42:09.754020    7409 main.go:141] libmachine: STDOUT: 
	I0624 03:42:09.754038    7409 main.go:141] libmachine: STDERR: 
	I0624 03:42:09.754089    7409 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/old-k8s-version-703000/disk.qcow2 +20000M
	I0624 03:42:09.764972    7409 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:42:09.764990    7409 main.go:141] libmachine: STDERR: 
	I0624 03:42:09.765002    7409 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/old-k8s-version-703000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/old-k8s-version-703000/disk.qcow2
	I0624 03:42:09.765008    7409 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:42:09.765056    7409 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/old-k8s-version-703000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/old-k8s-version-703000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/old-k8s-version-703000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:8c:36:44:a1:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/old-k8s-version-703000/disk.qcow2
	I0624 03:42:09.766820    7409 main.go:141] libmachine: STDOUT: 
	I0624 03:42:09.766833    7409 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:42:09.766846    7409 client.go:171] duration metric: took 266.300458ms to LocalClient.Create
	I0624 03:42:11.769008    7409 start.go:128] duration metric: took 2.328794542s to createHost
	I0624 03:42:11.769065    7409 start.go:83] releasing machines lock for "old-k8s-version-703000", held for 2.32924625s
	W0624 03:42:11.769467    7409 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-703000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-703000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:42:11.784973    7409 out.go:177] 
	W0624 03:42:11.790152    7409 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:42:11.790179    7409 out.go:239] * 
	* 
	W0624 03:42:11.793066    7409 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:42:11.804036    7409 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-703000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-703000 -n old-k8s-version-703000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-703000 -n old-k8s-version-703000: exit status 7 (67.656ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-703000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.89s)
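
The createHost sequence in the log above is mechanical: copy the boot2docker ISO, run qemu-img convert to turn the raw seed file into a qcow2 disk, run qemu-img resize to grow it by +20000M, then hand the qemu-system-aarch64 command line to socket_vmnet_client. A sketch of the two disk steps via os/exec, with hypothetical short paths standing in for the machine-store paths shown above:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    // createDisk mirrors the two qemu-img invocations in the log above:
    // convert the raw seed image to qcow2, then grow it by sizeMB megabytes.
    func createDisk(raw, qcow2 string, sizeMB int) error {
        steps := [][]string{
            {"qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2},
            {"qemu-img", "resize", qcow2, fmt.Sprintf("+%dM", sizeMB)},
        }
        for _, args := range steps {
            if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
                return fmt.Errorf("%v failed: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        // Hypothetical local paths; the run above uses the profile's machine directory.
        if err := createDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
            log.Fatal(err)
        }
    }

Note that both qemu-img steps succeed in the run above; the failure only occurs afterwards, when socket_vmnet_client cannot reach the networking daemon.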

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-703000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-703000 create -f testdata/busybox.yaml: exit status 1 (29.211583ms)

** stderr ** 
	error: context "old-k8s-version-703000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-703000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-703000 -n old-k8s-version-703000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-703000 -n old-k8s-version-703000: exit status 7 (30.596167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-703000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-703000 -n old-k8s-version-703000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-703000 -n old-k8s-version-703000: exit status 7 (30.789959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-703000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
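
From DeployApp onward, every kubectl invocation fails with 'context "old-k8s-version-703000" does not exist': the first start never got far enough to write the context into the kubeconfig, so each follow-on subtest fails at client-config time. A sketch, assuming kubectl is on PATH, of checking for a context by name ('kubectl config get-contexts -o name' prints one context name per line):

    package main

    import (
        "bufio"
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // contextExists reports whether the current kubeconfig contains the named context.
    func contextExists(name string) (bool, error) {
        out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
        if err != nil {
            return false, err
        }
        sc := bufio.NewScanner(bytes.NewReader(out))
        for sc.Scan() {
            if sc.Text() == name {
                return true, nil
            }
        }
        return false, sc.Err()
    }

    func main() {
        ok, err := contextExists("old-k8s-version-703000")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("context present:", ok)
    }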

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-703000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-703000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-703000 describe deploy/metrics-server -n kube-system: exit status 1 (26.998666ms)

** stderr ** 
	error: context "old-k8s-version-703000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-703000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-703000 -n old-k8s-version-703000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-703000 -n old-k8s-version-703000: exit status 7 (30.526166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-703000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-703000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-703000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.193414583s)

-- stdout --
	* [old-k8s-version-703000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-703000" primary control-plane node in "old-k8s-version-703000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-703000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-703000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0624 03:42:15.333325    7463 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:42:15.333697    7463 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:42:15.333701    7463 out.go:304] Setting ErrFile to fd 2...
	I0624 03:42:15.333704    7463 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:42:15.333925    7463 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:42:15.335239    7463 out.go:298] Setting JSON to false
	I0624 03:42:15.351425    7463 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6105,"bootTime":1719219630,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:42:15.351483    7463 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:42:15.355311    7463 out.go:177] * [old-k8s-version-703000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:42:15.363202    7463 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:42:15.363258    7463 notify.go:220] Checking for updates...
	I0624 03:42:15.370205    7463 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:42:15.373178    7463 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:42:15.376269    7463 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:42:15.379249    7463 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:42:15.382280    7463 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:42:15.385443    7463 config.go:182] Loaded profile config "old-k8s-version-703000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0624 03:42:15.389266    7463 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0624 03:42:15.392220    7463 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:42:15.395220    7463 out.go:177] * Using the qemu2 driver based on existing profile
	I0624 03:42:15.401161    7463 start.go:297] selected driver: qemu2
	I0624 03:42:15.401168    7463 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-703000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:42:15.401231    7463 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:42:15.403644    7463 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 03:42:15.403689    7463 cni.go:84] Creating CNI manager for ""
	I0624 03:42:15.403697    7463 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0624 03:42:15.403716    7463 start.go:340] cluster config:
	{Name:old-k8s-version-703000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:42:15.408207    7463 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:42:15.420109    7463 out.go:177] * Starting "old-k8s-version-703000" primary control-plane node in "old-k8s-version-703000" cluster
	I0624 03:42:15.424263    7463 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0624 03:42:15.424281    7463 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0624 03:42:15.424290    7463 cache.go:56] Caching tarball of preloaded images
	I0624 03:42:15.424358    7463 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0624 03:42:15.424380    7463 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0624 03:42:15.424448    7463 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/old-k8s-version-703000/config.json ...
	I0624 03:42:15.424909    7463 start.go:360] acquireMachinesLock for old-k8s-version-703000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:42:15.424940    7463 start.go:364] duration metric: took 23.042µs to acquireMachinesLock for "old-k8s-version-703000"
	I0624 03:42:15.424949    7463 start.go:96] Skipping create...Using existing machine configuration
	I0624 03:42:15.424957    7463 fix.go:54] fixHost starting: 
	I0624 03:42:15.425075    7463 fix.go:112] recreateIfNeeded on old-k8s-version-703000: state=Stopped err=<nil>
	W0624 03:42:15.425084    7463 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 03:42:15.429173    7463 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-703000" ...
	I0624 03:42:15.436296    7463 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/old-k8s-version-703000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/old-k8s-version-703000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/old-k8s-version-703000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:8c:36:44:a1:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/old-k8s-version-703000/disk.qcow2
	I0624 03:42:15.438467    7463 main.go:141] libmachine: STDOUT: 
	I0624 03:42:15.438484    7463 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:42:15.438514    7463 fix.go:56] duration metric: took 13.558917ms for fixHost
	I0624 03:42:15.438520    7463 start.go:83] releasing machines lock for "old-k8s-version-703000", held for 13.575ms
	W0624 03:42:15.438525    7463 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:42:15.438558    7463 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:42:15.438563    7463 start.go:728] Will try again in 5 seconds ...
	I0624 03:42:20.440636    7463 start.go:360] acquireMachinesLock for old-k8s-version-703000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:42:20.441046    7463 start.go:364] duration metric: took 309.792µs to acquireMachinesLock for "old-k8s-version-703000"
	I0624 03:42:20.441191    7463 start.go:96] Skipping create...Using existing machine configuration
	I0624 03:42:20.441210    7463 fix.go:54] fixHost starting: 
	I0624 03:42:20.441995    7463 fix.go:112] recreateIfNeeded on old-k8s-version-703000: state=Stopped err=<nil>
	W0624 03:42:20.442021    7463 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 03:42:20.451296    7463 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-703000" ...
	I0624 03:42:20.455494    7463 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/old-k8s-version-703000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/old-k8s-version-703000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/old-k8s-version-703000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:8c:36:44:a1:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/old-k8s-version-703000/disk.qcow2
	I0624 03:42:20.464528    7463 main.go:141] libmachine: STDOUT: 
	I0624 03:42:20.464596    7463 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:42:20.464709    7463 fix.go:56] duration metric: took 23.494416ms for fixHost
	I0624 03:42:20.464735    7463 start.go:83] releasing machines lock for "old-k8s-version-703000", held for 23.6615ms
	W0624 03:42:20.464980    7463 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-703000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-703000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:42:20.470560    7463 out.go:177] 
	W0624 03:42:20.474311    7463 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:42:20.474333    7463 out.go:239] * 
	* 
	W0624 03:42:20.476826    7463 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:42:20.485311    7463 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-703000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-703000 -n old-k8s-version-703000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-703000 -n old-k8s-version-703000: exit status 7 (68.88025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-703000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-703000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-703000 -n old-k8s-version-703000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-703000 -n old-k8s-version-703000: exit status 7 (33.086334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-703000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-703000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-703000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-703000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.3675ms)

** stderr ** 
	error: context "old-k8s-version-703000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-703000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-703000 -n old-k8s-version-703000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-703000 -n old-k8s-version-703000: exit status 7 (30.021208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-703000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-703000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-703000 -n old-k8s-version-703000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-703000 -n old-k8s-version-703000: exit status 7 (30.678708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-703000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
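
The (-want +got) block above is go-cmp diff notation: "-" lines are expected entries missing from the actual output. All eight v1.20.0 images are reported missing because "image list" has nothing to enumerate against a stopped host, not because individual images failed to cache. Against a healthy profile, the same invocation (taken from the test) would print the cached control-plane images as JSON:

    out/minikube-darwin-arm64 -p old-k8s-version-703000 image list --format=json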

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-703000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-703000 --alsologtostderr -v=1: exit status 83 (41.740167ms)

-- stdout --
	* The control-plane node old-k8s-version-703000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-703000"

-- /stdout --
** stderr ** 
	I0624 03:42:20.756756    7482 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:42:20.757166    7482 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:42:20.757172    7482 out.go:304] Setting ErrFile to fd 2...
	I0624 03:42:20.757174    7482 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:42:20.757353    7482 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:42:20.757581    7482 out.go:298] Setting JSON to false
	I0624 03:42:20.757587    7482 mustload.go:65] Loading cluster: old-k8s-version-703000
	I0624 03:42:20.757784    7482 config.go:182] Loaded profile config "old-k8s-version-703000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0624 03:42:20.762757    7482 out.go:177] * The control-plane node old-k8s-version-703000 host is not running: state=Stopped
	I0624 03:42:20.765762    7482 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-703000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-703000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-703000 -n old-k8s-version-703000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-703000 -n old-k8s-version-703000: exit status 7 (30.302375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-703000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-703000 -n old-k8s-version-703000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-703000 -n old-k8s-version-703000: exit status 7 (30.457708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-703000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
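
pause never reaches the container runtime here: per the trace, mustload inspects the profile first, finds state=Stopped, and exits 83 with the guidance shown in stdout rather than attempting the operation. The suggested recovery is the standard start command, which in this environment would hit the same VM-creation failure seen throughout the run:

    out/minikube-darwin-arm64 start -p old-k8s-version-703000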

TestStartStop/group/no-preload/serial/FirstStart (10.06s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-030000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-030000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (9.98631725s)

-- stdout --
	* [no-preload-030000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-030000" primary control-plane node in "no-preload-030000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-030000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0624 03:42:21.218448    7505 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:42:21.218584    7505 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:42:21.218587    7505 out.go:304] Setting ErrFile to fd 2...
	I0624 03:42:21.218590    7505 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:42:21.218714    7505 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:42:21.219788    7505 out.go:298] Setting JSON to false
	I0624 03:42:21.235708    7505 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6111,"bootTime":1719219630,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:42:21.235764    7505 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:42:21.239467    7505 out.go:177] * [no-preload-030000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:42:21.245247    7505 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:42:21.245318    7505 notify.go:220] Checking for updates...
	I0624 03:42:21.249221    7505 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:42:21.252222    7505 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:42:21.255253    7505 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:42:21.258188    7505 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:42:21.261258    7505 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:42:21.264525    7505 config.go:182] Loaded profile config "cert-expiration-509000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:42:21.264581    7505 config.go:182] Loaded profile config "multinode-913000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:42:21.264636    7505 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:42:21.269114    7505 out.go:177] * Using the qemu2 driver based on user configuration
	I0624 03:42:21.276291    7505 start.go:297] selected driver: qemu2
	I0624 03:42:21.276299    7505 start.go:901] validating driver "qemu2" against <nil>
	I0624 03:42:21.276306    7505 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:42:21.278528    7505 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0624 03:42:21.281134    7505 out.go:177] * Automatically selected the socket_vmnet network
	I0624 03:42:21.284278    7505 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 03:42:21.284297    7505 cni.go:84] Creating CNI manager for ""
	I0624 03:42:21.284304    7505 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:42:21.284308    7505 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0624 03:42:21.284334    7505 start.go:340] cluster config:
	{Name:no-preload-030000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:no-preload-030000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:42:21.288798    7505 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:42:21.294247    7505 out.go:177] * Starting "no-preload-030000" primary control-plane node in "no-preload-030000" cluster
	I0624 03:42:21.298227    7505 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:42:21.298308    7505 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/no-preload-030000/config.json ...
	I0624 03:42:21.298325    7505 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/no-preload-030000/config.json: {Name:mk95f436df346d563b9dfc030da421f169a51641 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:42:21.298339    7505 cache.go:107] acquiring lock: {Name:mked59fb8aa75320154cc5604c97a69c9d3437cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:42:21.298348    7505 cache.go:107] acquiring lock: {Name:mk51f18764a008898c34dee6298b6220315184df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:42:21.298359    7505 cache.go:107] acquiring lock: {Name:mkf8624c530e1bef5d15b839b2cd060a259387cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:42:21.298400    7505 cache.go:115] /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0624 03:42:21.298409    7505 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 81.833µs
	I0624 03:42:21.298416    7505 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0624 03:42:21.298434    7505 cache.go:107] acquiring lock: {Name:mkdd95741bd18a4894e3df918851f9bb89c5ba0f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:42:21.298340    7505 cache.go:107] acquiring lock: {Name:mk6cf6a6905df2adb42543bb2647437945550b43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:42:21.298473    7505 cache.go:107] acquiring lock: {Name:mk2591e426be0a7c3c0353cc5a77244e3fa7c2ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:42:21.298512    7505 cache.go:107] acquiring lock: {Name:mk339fc1506bbaaf19f9fe19e8b2bd5bf5e38533 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:42:21.298492    7505 cache.go:107] acquiring lock: {Name:mk5d75032ddd50cc8aa4ef3f3329b1df091d3214 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:42:21.298522    7505 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0624 03:42:21.298581    7505 start.go:360] acquireMachinesLock for no-preload-030000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:42:21.298625    7505 start.go:364] duration metric: took 36.375µs to acquireMachinesLock for "no-preload-030000"
	I0624 03:42:21.298637    7505 start.go:93] Provisioning new machine with config: &{Name:no-preload-030000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:no-preload-030000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:42:21.298671    7505 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:42:21.298691    7505 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0624 03:42:21.298729    7505 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0624 03:42:21.298705    7505 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0624 03:42:21.298671    7505 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.2
	I0624 03:42:21.298751    7505 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.2
	I0624 03:42:21.299105    7505 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.2
	I0624 03:42:21.306147    7505 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0624 03:42:21.306943    7505 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0624 03:42:21.311434    7505 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0624 03:42:21.311560    7505 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.2
	I0624 03:42:21.311782    7505 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0624 03:42:21.322740    7505 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0624 03:42:21.322773    7505 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.2
	I0624 03:42:21.322847    7505 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.2
	I0624 03:42:21.324055    7505 start.go:159] libmachine.API.Create for "no-preload-030000" (driver="qemu2")
	I0624 03:42:21.324076    7505 client.go:168] LocalClient.Create starting
	I0624 03:42:21.324182    7505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:42:21.324215    7505 main.go:141] libmachine: Decoding PEM data...
	I0624 03:42:21.324233    7505 main.go:141] libmachine: Parsing certificate...
	I0624 03:42:21.324277    7505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:42:21.324300    7505 main.go:141] libmachine: Decoding PEM data...
	I0624 03:42:21.324320    7505 main.go:141] libmachine: Parsing certificate...
	I0624 03:42:21.324725    7505 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:42:21.469364    7505 main.go:141] libmachine: Creating SSH key...
	I0624 03:42:21.646159    7505 main.go:141] libmachine: Creating Disk image...
	I0624 03:42:21.646175    7505 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:42:21.646426    7505 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/no-preload-030000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/no-preload-030000/disk.qcow2
	I0624 03:42:21.659064    7505 main.go:141] libmachine: STDOUT: 
	I0624 03:42:21.659084    7505 main.go:141] libmachine: STDERR: 
	I0624 03:42:21.659143    7505 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/no-preload-030000/disk.qcow2 +20000M
	I0624 03:42:21.670463    7505 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:42:21.670477    7505 main.go:141] libmachine: STDERR: 
	I0624 03:42:21.670499    7505 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/no-preload-030000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/no-preload-030000/disk.qcow2
	I0624 03:42:21.670504    7505 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:42:21.670544    7505 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/no-preload-030000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/no-preload-030000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/no-preload-030000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:8b:0d:c7:39:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/no-preload-030000/disk.qcow2
	I0624 03:42:21.672268    7505 main.go:141] libmachine: STDOUT: 
	I0624 03:42:21.672281    7505 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:42:21.672309    7505 client.go:171] duration metric: took 348.224791ms to LocalClient.Create
	I0624 03:42:22.188677    7505 cache.go:162] opening:  /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.2
	I0624 03:42:22.208178    7505 cache.go:162] opening:  /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0624 03:42:22.217992    7505 cache.go:162] opening:  /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0624 03:42:22.237164    7505 cache.go:162] opening:  /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.2
	I0624 03:42:22.320675    7505 cache.go:157] /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0624 03:42:22.320718    7505 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 1.022360791s
	I0624 03:42:22.320752    7505 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0624 03:42:22.358578    7505 cache.go:162] opening:  /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.2
	I0624 03:42:22.365379    7505 cache.go:162] opening:  /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.2
	I0624 03:42:22.399753    7505 cache.go:162] opening:  /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0
	I0624 03:42:23.672635    7505 start.go:128] duration metric: took 2.373959708s to createHost
	I0624 03:42:23.672675    7505 start.go:83] releasing machines lock for "no-preload-030000", held for 2.374061s
	W0624 03:42:23.672747    7505 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:42:23.688697    7505 out.go:177] * Deleting "no-preload-030000" in qemu2 ...
	W0624 03:42:23.718248    7505 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:42:23.718273    7505 start.go:728] Will try again in 5 seconds ...
	I0624 03:42:25.232974    7505 cache.go:157] /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.2 exists
	I0624 03:42:25.233025    7505 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.2" -> "/Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.2" took 3.934601292s
	I0624 03:42:25.233107    7505 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.2 -> /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.2 succeeded
	I0624 03:42:25.246478    7505 cache.go:157] /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.2 exists
	I0624 03:42:25.246516    7505 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.2" -> "/Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.2" took 3.948206458s
	I0624 03:42:25.246539    7505 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.2 -> /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.2 succeeded
	I0624 03:42:25.316424    7505 cache.go:157] /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0624 03:42:25.316467    7505 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 4.018040792s
	I0624 03:42:25.316488    7505 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0624 03:42:26.108683    7505 cache.go:157] /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.2 exists
	I0624 03:42:26.108728    7505 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.2" -> "/Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.2" took 4.810437s
	I0624 03:42:26.108755    7505 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.2 -> /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.2 succeeded
	I0624 03:42:26.693717    7505 cache.go:157] /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.2 exists
	I0624 03:42:26.693744    7505 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.2" -> "/Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.2" took 5.395277875s
	I0624 03:42:26.693755    7505 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.2 -> /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.2 succeeded
	I0624 03:42:28.718503    7505 start.go:360] acquireMachinesLock for no-preload-030000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:42:28.718910    7505 start.go:364] duration metric: took 345.5µs to acquireMachinesLock for "no-preload-030000"
	I0624 03:42:28.719060    7505 start.go:93] Provisioning new machine with config: &{Name:no-preload-030000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:no-preload-030000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:42:28.719324    7505 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:42:28.729706    7505 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0624 03:42:28.781312    7505 start.go:159] libmachine.API.Create for "no-preload-030000" (driver="qemu2")
	I0624 03:42:28.781346    7505 client.go:168] LocalClient.Create starting
	I0624 03:42:28.781456    7505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:42:28.781541    7505 main.go:141] libmachine: Decoding PEM data...
	I0624 03:42:28.781568    7505 main.go:141] libmachine: Parsing certificate...
	I0624 03:42:28.781640    7505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:42:28.781683    7505 main.go:141] libmachine: Decoding PEM data...
	I0624 03:42:28.781699    7505 main.go:141] libmachine: Parsing certificate...
	I0624 03:42:28.782199    7505 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:42:28.938715    7505 main.go:141] libmachine: Creating SSH key...
	I0624 03:42:29.096525    7505 main.go:141] libmachine: Creating Disk image...
	I0624 03:42:29.096532    7505 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:42:29.096766    7505 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/no-preload-030000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/no-preload-030000/disk.qcow2
	I0624 03:42:29.109865    7505 main.go:141] libmachine: STDOUT: 
	I0624 03:42:29.109904    7505 main.go:141] libmachine: STDERR: 
	I0624 03:42:29.109974    7505 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/no-preload-030000/disk.qcow2 +20000M
	I0624 03:42:29.121151    7505 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:42:29.121166    7505 main.go:141] libmachine: STDERR: 
	I0624 03:42:29.121186    7505 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/no-preload-030000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/no-preload-030000/disk.qcow2
	I0624 03:42:29.121191    7505 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:42:29.121233    7505 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/no-preload-030000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/no-preload-030000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/no-preload-030000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:a6:ed:98:2a:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/no-preload-030000/disk.qcow2
	I0624 03:42:29.123002    7505 main.go:141] libmachine: STDOUT: 
	I0624 03:42:29.123018    7505 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:42:29.123032    7505 client.go:171] duration metric: took 341.685167ms to LocalClient.Create
	I0624 03:42:31.123326    7505 start.go:128] duration metric: took 2.403989875s to createHost
	I0624 03:42:31.123373    7505 start.go:83] releasing machines lock for "no-preload-030000", held for 2.404459916s
	W0624 03:42:31.123709    7505 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-030000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-030000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:42:31.139223    7505 out.go:177] 
	W0624 03:42:31.143314    7505 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:42:31.143344    7505 out.go:239] * 
	* 
	W0624 03:42:31.146237    7505 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:42:31.160178    7505 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-030000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-030000 -n no-preload-030000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-030000 -n no-preload-030000: exit status 7 (67.373625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-030000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.06s)
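
The root cause is visible in the libmachine trace above: QEMU is launched through socket_vmnet_client, which cannot reach the daemon's Unix socket at /var/run/socket_vmnet, so both creation attempts abort before the VM boots. A quick host-side check, assuming socket_vmnet was installed via Homebrew as in minikube's qemu2 driver docs (paths and service management may differ per install):

    # the socket QEMU's netdev fd is handed from; it must exist and accept connections
    ls -l /var/run/socket_vmnet
    # restart the daemon (it runs as root) and retry the start
    sudo brew services restart socket_vmnet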

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-030000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-030000 create -f testdata/busybox.yaml: exit status 1 (29.497958ms)

** stderr ** 
	error: context "no-preload-030000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-030000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-030000 -n no-preload-030000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-030000 -n no-preload-030000: exit status 7 (29.911042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-030000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-030000 -n no-preload-030000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-030000 -n no-preload-030000: exit status 7 (30.579958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-030000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-030000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-030000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-030000 describe deploy/metrics-server -n kube-system: exit status 1 (26.82775ms)

** stderr ** 
	error: context "no-preload-030000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-030000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-030000 -n no-preload-030000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-030000 -n no-preload-030000: exit status 7 (30.818ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-030000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)
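
Note the asymmetry in this test: "addons enable" itself exits cleanly (it appears to only record the metrics-server addon, custom image, and registry override in the profile config), while the follow-up kubectl describe requires a live apiserver and fails on the missing context. Against a running cluster, the verification step is simply the describe call from the test arguments:

    kubectl --context no-preload-030000 describe deploy/metrics-server -n kube-system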

TestStartStop/group/no-preload/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-030000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-030000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (5.188645458s)

-- stdout --
	* [no-preload-030000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-030000" primary control-plane node in "no-preload-030000" cluster
	* Restarting existing qemu2 VM for "no-preload-030000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-030000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0624 03:42:35.193108    7581 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:42:35.193240    7581 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:42:35.193244    7581 out.go:304] Setting ErrFile to fd 2...
	I0624 03:42:35.193246    7581 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:42:35.193386    7581 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:42:35.194381    7581 out.go:298] Setting JSON to false
	I0624 03:42:35.210427    7581 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6125,"bootTime":1719219630,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:42:35.210495    7581 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:42:35.214060    7581 out.go:177] * [no-preload-030000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:42:35.220940    7581 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:42:35.220985    7581 notify.go:220] Checking for updates...
	I0624 03:42:35.227847    7581 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:42:35.230963    7581 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:42:35.234002    7581 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:42:35.235413    7581 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:42:35.238938    7581 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:42:35.242201    7581 config.go:182] Loaded profile config "no-preload-030000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:42:35.242469    7581 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:42:35.246797    7581 out.go:177] * Using the qemu2 driver based on existing profile
	I0624 03:42:35.253948    7581 start.go:297] selected driver: qemu2
	I0624 03:42:35.253955    7581 start.go:901] validating driver "qemu2" against &{Name:no-preload-030000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:no-preload-030000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:42:35.254015    7581 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:42:35.256370    7581 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 03:42:35.256417    7581 cni.go:84] Creating CNI manager for ""
	I0624 03:42:35.256425    7581 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:42:35.256475    7581 start.go:340] cluster config:
	{Name:no-preload-030000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:no-preload-030000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:42:35.260696    7581 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:42:35.267905    7581 out.go:177] * Starting "no-preload-030000" primary control-plane node in "no-preload-030000" cluster
	I0624 03:42:35.271963    7581 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:42:35.272031    7581 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/no-preload-030000/config.json ...
	I0624 03:42:35.272087    7581 cache.go:107] acquiring lock: {Name:mked59fb8aa75320154cc5604c97a69c9d3437cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:42:35.272086    7581 cache.go:107] acquiring lock: {Name:mk6cf6a6905df2adb42543bb2647437945550b43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:42:35.272105    7581 cache.go:107] acquiring lock: {Name:mk51f18764a008898c34dee6298b6220315184df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:42:35.272157    7581 cache.go:115] /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0624 03:42:35.272162    7581 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 76.75µs
	I0624 03:42:35.272162    7581 cache.go:115] /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.2 exists
	I0624 03:42:35.272203    7581 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.2" -> "/Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.2" took 118.25µs
	I0624 03:42:35.272207    7581 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.2 -> /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.2 succeeded
	I0624 03:42:35.272168    7581 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0624 03:42:35.272176    7581 cache.go:115] /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.2 exists
	I0624 03:42:35.272213    7581 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.2" -> "/Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.2" took 157.416µs
	I0624 03:42:35.272216    7581 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.2 -> /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.2 succeeded
	I0624 03:42:35.272214    7581 cache.go:107] acquiring lock: {Name:mk5d75032ddd50cc8aa4ef3f3329b1df091d3214 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:42:35.272186    7581 cache.go:107] acquiring lock: {Name:mk2591e426be0a7c3c0353cc5a77244e3fa7c2ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:42:35.272235    7581 cache.go:107] acquiring lock: {Name:mkf8624c530e1bef5d15b839b2cd060a259387cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:42:35.272186    7581 cache.go:107] acquiring lock: {Name:mk339fc1506bbaaf19f9fe19e8b2bd5bf5e38533 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:42:35.272174    7581 cache.go:107] acquiring lock: {Name:mkdd95741bd18a4894e3df918851f9bb89c5ba0f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:42:35.272278    7581 cache.go:115] /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0624 03:42:35.272282    7581 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 68.209µs
	I0624 03:42:35.272289    7581 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0624 03:42:35.272293    7581 cache.go:115] /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.2 exists
	I0624 03:42:35.272302    7581 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.2" -> "/Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.2" took 117.333µs
	I0624 03:42:35.272306    7581 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.2 -> /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.2 succeeded
	I0624 03:42:35.272318    7581 cache.go:115] /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.2 exists
	I0624 03:42:35.272315    7581 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0624 03:42:35.272343    7581 cache.go:115] /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0624 03:42:35.272348    7581 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 150.417µs
	I0624 03:42:35.272351    7581 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0624 03:42:35.272332    7581 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.2" -> "/Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.2" took 147.167µs
	I0624 03:42:35.272366    7581 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.2 -> /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.2 succeeded
	I0624 03:42:35.272462    7581 start.go:360] acquireMachinesLock for no-preload-030000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:42:35.272493    7581 start.go:364] duration metric: took 24.916µs to acquireMachinesLock for "no-preload-030000"
	I0624 03:42:35.272501    7581 start.go:96] Skipping create...Using existing machine configuration
	I0624 03:42:35.272509    7581 fix.go:54] fixHost starting: 
	I0624 03:42:35.272623    7581 fix.go:112] recreateIfNeeded on no-preload-030000: state=Stopped err=<nil>
	W0624 03:42:35.272631    7581 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 03:42:35.280972    7581 out.go:177] * Restarting existing qemu2 VM for "no-preload-030000" ...
	I0624 03:42:35.284969    7581 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/no-preload-030000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/no-preload-030000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/no-preload-030000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:a6:ed:98:2a:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/no-preload-030000/disk.qcow2
	I0624 03:42:35.285340    7581 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0624 03:42:35.287140    7581 main.go:141] libmachine: STDOUT: 
	I0624 03:42:35.287160    7581 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:42:35.287187    7581 fix.go:56] duration metric: took 14.677959ms for fixHost
	I0624 03:42:35.287192    7581 start.go:83] releasing machines lock for "no-preload-030000", held for 14.694792ms
	W0624 03:42:35.287197    7581 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:42:35.287229    7581 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:42:35.287235    7581 start.go:728] Will try again in 5 seconds ...
	I0624 03:42:36.125236    7581 cache.go:162] opening:  /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0
	I0624 03:42:40.287658    7581 start.go:360] acquireMachinesLock for no-preload-030000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:42:40.288097    7581 start.go:364] duration metric: took 365.666µs to acquireMachinesLock for "no-preload-030000"
	I0624 03:42:40.288243    7581 start.go:96] Skipping create...Using existing machine configuration
	I0624 03:42:40.288272    7581 fix.go:54] fixHost starting: 
	I0624 03:42:40.288990    7581 fix.go:112] recreateIfNeeded on no-preload-030000: state=Stopped err=<nil>
	W0624 03:42:40.289016    7581 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 03:42:40.293466    7581 out.go:177] * Restarting existing qemu2 VM for "no-preload-030000" ...
	I0624 03:42:40.302715    7581 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/no-preload-030000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/no-preload-030000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/no-preload-030000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:a6:ed:98:2a:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/no-preload-030000/disk.qcow2
	I0624 03:42:40.313317    7581 main.go:141] libmachine: STDOUT: 
	I0624 03:42:40.313379    7581 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:42:40.313470    7581 fix.go:56] duration metric: took 25.200084ms for fixHost
	I0624 03:42:40.313491    7581 start.go:83] releasing machines lock for "no-preload-030000", held for 25.367958ms
	W0624 03:42:40.313645    7581 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-030000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-030000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:42:40.321507    7581 out.go:177] 
	W0624 03:42:40.324656    7581 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:42:40.324688    7581 out.go:239] * 
	* 
	W0624 03:42:40.327359    7581 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:42:40.336431    7581 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-030000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-030000 -n no-preload-030000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-030000 -n no-preload-030000: exit status 7 (68.516958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-030000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.26s)
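Every failure in this group reduces to the same host-side error seen above: socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet, so QEMU never receives a network file descriptor and the VM start is aborted. A minimal sketch of how one might verify the daemon on the build host, assuming socket_vmnet is installed under /opt/socket_vmnet as the log paths indicate (the gateway address below is the upstream README's example default, not taken from this log):

    # Check that the daemon socket exists (path from the error above)
    ls -l /var/run/socket_vmnet
    # If it is missing, start the daemon manually; vmnet requires root
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet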

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-030000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-030000 -n no-preload-030000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-030000 -n no-preload-030000: exit status 7 (32.265458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-030000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
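The pod check never runs here: the kubeconfig context for the profile would only have been recreated by the failed SecondStart above, so the client config lookup fails immediately. A quick hypothetical confirmation on the host:

    # The profile's context is absent from the kubeconfig, hence "does not exist"
    kubectl config get-contexts -o name | grep no-preload-030000 || echo "context missing"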

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-030000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-030000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-030000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.904334ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-030000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-030000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-030000 -n no-preload-030000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-030000 -n no-preload-030000: exit status 7 (30.298ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-030000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)
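The assertion at start_stop_delete_test.go:297 expects the deployment description to contain the substring " registry.k8s.io/echoserver:1.4"; since kubectl describe produced no output, the check runs against an empty string. An equivalent manual check, as an illustration of the substring assertion rather than the test's literal code, and only meaningful against a running cluster:

    kubectl --context no-preload-030000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard | grep " registry.k8s.io/echoserver:1.4"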

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-030000 image list --format=json
start_stop_delete_test.go:304: v1.30.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.2",
- 	"registry.k8s.io/kube-controller-manager:v1.30.2",
- 	"registry.k8s.io/kube-proxy:v1.30.2",
- 	"registry.k8s.io/kube-scheduler:v1.30.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-030000 -n no-preload-030000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-030000 -n no-preload-030000: exit status 7 (29.708458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-030000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
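The block above is a go-cmp-style diff: each line prefixed with "-" is an image the test wanted but did not find, and the "+got" side is empty because image list ran against a profile whose VM never booted. Reproducing the empty result manually (command taken from the test log):

    # Lists no images for a stopped, never-provisioned profile, so all eight wanted images show as missing
    out/minikube-darwin-arm64 -p no-preload-030000 image list --format=json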

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-030000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-030000 --alsologtostderr -v=1: exit status 83 (42.872125ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-030000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-030000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0624 03:42:40.609288    7607 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:42:40.609423    7607 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:42:40.609426    7607 out.go:304] Setting ErrFile to fd 2...
	I0624 03:42:40.609428    7607 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:42:40.609546    7607 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:42:40.609767    7607 out.go:298] Setting JSON to false
	I0624 03:42:40.609773    7607 mustload.go:65] Loading cluster: no-preload-030000
	I0624 03:42:40.609949    7607 config.go:182] Loaded profile config "no-preload-030000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:42:40.614876    7607 out.go:177] * The control-plane node no-preload-030000 host is not running: state=Stopped
	I0624 03:42:40.618839    7607 out.go:177]   To start a cluster, run: "minikube start -p no-preload-030000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-030000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-030000 -n no-preload-030000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-030000 -n no-preload-030000: exit status 7 (29.960625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-030000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-030000 -n no-preload-030000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-030000 -n no-preload-030000: exit status 7 (30.328125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-030000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
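Note the exit codes: pause returns 83 alongside advisory output (the "host is not running" hint) rather than a provisioning error, while status returns 7 for a stopped host; neither is the hard exit 80 of the start failures. A hypothetical guard mirroring what the helper effectively checks:

    # Only attempt pause when status reports a running host
    if out/minikube-darwin-arm64 status -p no-preload-030000 >/dev/null 2>&1; then
        out/minikube-darwin-arm64 pause -p no-preload-030000
    else
        echo "no-preload-030000 is not running; skipping pause"
    fi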

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (9.89s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-589000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-589000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (9.818676792s)

                                                
                                                
-- stdout --
	* [embed-certs-589000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-589000" primary control-plane node in "embed-certs-589000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-589000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0624 03:42:41.065155    7630 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:42:41.065278    7630 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:42:41.065284    7630 out.go:304] Setting ErrFile to fd 2...
	I0624 03:42:41.065286    7630 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:42:41.065404    7630 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:42:41.066458    7630 out.go:298] Setting JSON to false
	I0624 03:42:41.082296    7630 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6131,"bootTime":1719219630,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:42:41.082358    7630 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:42:41.087664    7630 out.go:177] * [embed-certs-589000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:42:41.093728    7630 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:42:41.093825    7630 notify.go:220] Checking for updates...
	I0624 03:42:41.100652    7630 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:42:41.103638    7630 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:42:41.106587    7630 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:42:41.109605    7630 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:42:41.112649    7630 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:42:41.115861    7630 config.go:182] Loaded profile config "cert-expiration-509000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:42:41.115929    7630 config.go:182] Loaded profile config "multinode-913000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:42:41.115972    7630 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:42:41.119531    7630 out.go:177] * Using the qemu2 driver based on user configuration
	I0624 03:42:41.126622    7630 start.go:297] selected driver: qemu2
	I0624 03:42:41.126629    7630 start.go:901] validating driver "qemu2" against <nil>
	I0624 03:42:41.126636    7630 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:42:41.128782    7630 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0624 03:42:41.131576    7630 out.go:177] * Automatically selected the socket_vmnet network
	I0624 03:42:41.134700    7630 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 03:42:41.134716    7630 cni.go:84] Creating CNI manager for ""
	I0624 03:42:41.134724    7630 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:42:41.134729    7630 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0624 03:42:41.134761    7630 start.go:340] cluster config:
	{Name:embed-certs-589000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-589000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:42:41.139213    7630 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:42:41.146550    7630 out.go:177] * Starting "embed-certs-589000" primary control-plane node in "embed-certs-589000" cluster
	I0624 03:42:41.149556    7630 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:42:41.149571    7630 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0624 03:42:41.149580    7630 cache.go:56] Caching tarball of preloaded images
	I0624 03:42:41.149645    7630 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0624 03:42:41.149651    7630 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:42:41.149724    7630 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/embed-certs-589000/config.json ...
	I0624 03:42:41.149740    7630 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/embed-certs-589000/config.json: {Name:mk7154e2194963de13ed50172f7501979f600731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:42:41.150030    7630 start.go:360] acquireMachinesLock for embed-certs-589000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:42:41.150063    7630 start.go:364] duration metric: took 27.416µs to acquireMachinesLock for "embed-certs-589000"
	I0624 03:42:41.150074    7630 start.go:93] Provisioning new machine with config: &{Name:embed-certs-589000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-589000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:42:41.150102    7630 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:42:41.157475    7630 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0624 03:42:41.174796    7630 start.go:159] libmachine.API.Create for "embed-certs-589000" (driver="qemu2")
	I0624 03:42:41.174832    7630 client.go:168] LocalClient.Create starting
	I0624 03:42:41.174894    7630 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:42:41.174925    7630 main.go:141] libmachine: Decoding PEM data...
	I0624 03:42:41.174937    7630 main.go:141] libmachine: Parsing certificate...
	I0624 03:42:41.174980    7630 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:42:41.175003    7630 main.go:141] libmachine: Decoding PEM data...
	I0624 03:42:41.175012    7630 main.go:141] libmachine: Parsing certificate...
	I0624 03:42:41.175439    7630 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:42:41.318162    7630 main.go:141] libmachine: Creating SSH key...
	I0624 03:42:41.405475    7630 main.go:141] libmachine: Creating Disk image...
	I0624 03:42:41.405481    7630 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:42:41.405704    7630 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/embed-certs-589000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/embed-certs-589000/disk.qcow2
	I0624 03:42:41.418240    7630 main.go:141] libmachine: STDOUT: 
	I0624 03:42:41.418260    7630 main.go:141] libmachine: STDERR: 
	I0624 03:42:41.418304    7630 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/embed-certs-589000/disk.qcow2 +20000M
	I0624 03:42:41.429032    7630 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:42:41.429055    7630 main.go:141] libmachine: STDERR: 
	I0624 03:42:41.429075    7630 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/embed-certs-589000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/embed-certs-589000/disk.qcow2
	I0624 03:42:41.429080    7630 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:42:41.429110    7630 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/embed-certs-589000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/embed-certs-589000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/embed-certs-589000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:54:cf:d6:96:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/embed-certs-589000/disk.qcow2
	I0624 03:42:41.430789    7630 main.go:141] libmachine: STDOUT: 
	I0624 03:42:41.430801    7630 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:42:41.430821    7630 client.go:171] duration metric: took 255.985041ms to LocalClient.Create
	I0624 03:42:43.433014    7630 start.go:128] duration metric: took 2.282909541s to createHost
	I0624 03:42:43.433155    7630 start.go:83] releasing machines lock for "embed-certs-589000", held for 2.283031792s
	W0624 03:42:43.433218    7630 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:42:43.444414    7630 out.go:177] * Deleting "embed-certs-589000" in qemu2 ...
	W0624 03:42:43.481649    7630 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:42:43.481685    7630 start.go:728] Will try again in 5 seconds ...
	I0624 03:42:48.483777    7630 start.go:360] acquireMachinesLock for embed-certs-589000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:42:48.484252    7630 start.go:364] duration metric: took 391.75µs to acquireMachinesLock for "embed-certs-589000"
	I0624 03:42:48.484388    7630 start.go:93] Provisioning new machine with config: &{Name:embed-certs-589000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-589000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:42:48.484657    7630 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:42:48.490275    7630 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0624 03:42:48.538358    7630 start.go:159] libmachine.API.Create for "embed-certs-589000" (driver="qemu2")
	I0624 03:42:48.538402    7630 client.go:168] LocalClient.Create starting
	I0624 03:42:48.538534    7630 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:42:48.538599    7630 main.go:141] libmachine: Decoding PEM data...
	I0624 03:42:48.538622    7630 main.go:141] libmachine: Parsing certificate...
	I0624 03:42:48.538680    7630 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:42:48.538726    7630 main.go:141] libmachine: Decoding PEM data...
	I0624 03:42:48.538739    7630 main.go:141] libmachine: Parsing certificate...
	I0624 03:42:48.539894    7630 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:42:48.700692    7630 main.go:141] libmachine: Creating SSH key...
	I0624 03:42:48.780178    7630 main.go:141] libmachine: Creating Disk image...
	I0624 03:42:48.780185    7630 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:42:48.780393    7630 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/embed-certs-589000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/embed-certs-589000/disk.qcow2
	I0624 03:42:48.792826    7630 main.go:141] libmachine: STDOUT: 
	I0624 03:42:48.792845    7630 main.go:141] libmachine: STDERR: 
	I0624 03:42:48.792896    7630 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/embed-certs-589000/disk.qcow2 +20000M
	I0624 03:42:48.803872    7630 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:42:48.803890    7630 main.go:141] libmachine: STDERR: 
	I0624 03:42:48.803905    7630 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/embed-certs-589000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/embed-certs-589000/disk.qcow2
	I0624 03:42:48.803909    7630 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:42:48.803944    7630 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/embed-certs-589000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/embed-certs-589000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/embed-certs-589000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:dd:9d:d0:a6:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/embed-certs-589000/disk.qcow2
	I0624 03:42:48.805670    7630 main.go:141] libmachine: STDOUT: 
	I0624 03:42:48.805691    7630 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:42:48.805704    7630 client.go:171] duration metric: took 267.2985ms to LocalClient.Create
	I0624 03:42:50.807861    7630 start.go:128] duration metric: took 2.323193s to createHost
	I0624 03:42:50.807927    7630 start.go:83] releasing machines lock for "embed-certs-589000", held for 2.323663875s
	W0624 03:42:50.808251    7630 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-589000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-589000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:42:50.821864    7630 out.go:177] 
	W0624 03:42:50.826050    7630 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:42:50.826074    7630 out.go:239] * 
	* 
	W0624 03:42:50.828643    7630 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:42:50.841900    7630 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-589000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-589000 -n embed-certs-589000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-589000 -n embed-certs-589000: exit status 7 (67.12625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-589000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.89s)
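Before the network step fails, disk provisioning itself succeeds: per the libmachine lines above, the driver's image creation is just two qemu-img calls. Condensed here with the machine directory shortened for readability:

    # Convert the raw boot image to qcow2, then grow it to the requested 20000 MB
    qemu-img convert -f raw -O qcow2 .../machines/embed-certs-589000/disk.qcow2.raw .../machines/embed-certs-589000/disk.qcow2
    qemu-img resize .../machines/embed-certs-589000/disk.qcow2 +20000M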

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-589000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-589000 create -f testdata/busybox.yaml: exit status 1 (29.272792ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-589000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-589000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-589000 -n embed-certs-589000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-589000 -n embed-certs-589000: exit status 7 (30.784583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-589000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-589000 -n embed-certs-589000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-589000 -n embed-certs-589000: exit status 7 (30.222208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-589000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-589000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-589000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-589000 describe deploy/metrics-server -n kube-system: exit status 1 (26.182917ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-589000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-589000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-589000 -n embed-certs-589000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-589000 -n embed-certs-589000: exit status 7 (29.873542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-589000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)
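Note that the addons enable invocation itself is not reported as a non-zero exit: the custom image and registry maps are persisted into the profile's cluster config (the same CustomAddonImages/CustomAddonRegistries maps appear in the no-preload config dump earlier in this log), so enabling succeeds with the VM down and only the kubectl verification fails. The observed sequence, restated:

    # Succeeds against the stored profile even though no VM is running
    out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-589000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    # Fails with exit 1: the kubeconfig context was never created
    kubectl --context embed-certs-589000 describe deploy/metrics-server -n kube-system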

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (5.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-589000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-589000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (5.184990917s)

                                                
                                                
-- stdout --
	* [embed-certs-589000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-589000" primary control-plane node in "embed-certs-589000" cluster
	* Restarting existing qemu2 VM for "embed-certs-589000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-589000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0624 03:42:53.204737    7672 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:42:53.204869    7672 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:42:53.204873    7672 out.go:304] Setting ErrFile to fd 2...
	I0624 03:42:53.204875    7672 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:42:53.205010    7672 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:42:53.205980    7672 out.go:298] Setting JSON to false
	I0624 03:42:53.221786    7672 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6143,"bootTime":1719219630,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:42:53.221859    7672 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:42:53.226028    7672 out.go:177] * [embed-certs-589000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:42:53.233126    7672 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:42:53.233163    7672 notify.go:220] Checking for updates...
	I0624 03:42:53.240030    7672 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:42:53.243091    7672 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:42:53.246120    7672 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:42:53.249053    7672 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:42:53.252124    7672 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:42:53.255389    7672 config.go:182] Loaded profile config "embed-certs-589000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:42:53.255658    7672 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:42:53.259052    7672 out.go:177] * Using the qemu2 driver based on existing profile
	I0624 03:42:53.265974    7672 start.go:297] selected driver: qemu2
	I0624 03:42:53.265980    7672 start.go:901] validating driver "qemu2" against &{Name:embed-certs-589000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-589000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:42:53.266059    7672 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:42:53.268387    7672 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 03:42:53.268427    7672 cni.go:84] Creating CNI manager for ""
	I0624 03:42:53.268434    7672 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:42:53.268460    7672 start.go:340] cluster config:
	{Name:embed-certs-589000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-589000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:42:53.272935    7672 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:42:53.280954    7672 out.go:177] * Starting "embed-certs-589000" primary control-plane node in "embed-certs-589000" cluster
	I0624 03:42:53.285052    7672 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:42:53.285067    7672 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0624 03:42:53.285077    7672 cache.go:56] Caching tarball of preloaded images
	I0624 03:42:53.285137    7672 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0624 03:42:53.285142    7672 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:42:53.285207    7672 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/embed-certs-589000/config.json ...
	I0624 03:42:53.285657    7672 start.go:360] acquireMachinesLock for embed-certs-589000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:42:53.285684    7672 start.go:364] duration metric: took 21.584µs to acquireMachinesLock for "embed-certs-589000"
	I0624 03:42:53.285693    7672 start.go:96] Skipping create...Using existing machine configuration
	I0624 03:42:53.285700    7672 fix.go:54] fixHost starting: 
	I0624 03:42:53.285815    7672 fix.go:112] recreateIfNeeded on embed-certs-589000: state=Stopped err=<nil>
	W0624 03:42:53.285822    7672 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 03:42:53.289899    7672 out.go:177] * Restarting existing qemu2 VM for "embed-certs-589000" ...
	I0624 03:42:53.298074    7672 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/embed-certs-589000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/embed-certs-589000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/embed-certs-589000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:dd:9d:d0:a6:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/embed-certs-589000/disk.qcow2
	I0624 03:42:53.300025    7672 main.go:141] libmachine: STDOUT: 
	I0624 03:42:53.300041    7672 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:42:53.300070    7672 fix.go:56] duration metric: took 14.371167ms for fixHost
	I0624 03:42:53.300075    7672 start.go:83] releasing machines lock for "embed-certs-589000", held for 14.386333ms
	W0624 03:42:53.300081    7672 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:42:53.300110    7672 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:42:53.300115    7672 start.go:728] Will try again in 5 seconds ...
	I0624 03:42:58.302123    7672 start.go:360] acquireMachinesLock for embed-certs-589000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:42:58.302489    7672 start.go:364] duration metric: took 296.959µs to acquireMachinesLock for "embed-certs-589000"
	I0624 03:42:58.302629    7672 start.go:96] Skipping create...Using existing machine configuration
	I0624 03:42:58.302648    7672 fix.go:54] fixHost starting: 
	I0624 03:42:58.303338    7672 fix.go:112] recreateIfNeeded on embed-certs-589000: state=Stopped err=<nil>
	W0624 03:42:58.303365    7672 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 03:42:58.307701    7672 out.go:177] * Restarting existing qemu2 VM for "embed-certs-589000" ...
	I0624 03:42:58.315952    7672 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/embed-certs-589000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/embed-certs-589000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/embed-certs-589000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:dd:9d:d0:a6:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/embed-certs-589000/disk.qcow2
	I0624 03:42:58.324622    7672 main.go:141] libmachine: STDOUT: 
	I0624 03:42:58.324671    7672 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:42:58.324729    7672 fix.go:56] duration metric: took 22.082792ms for fixHost
	I0624 03:42:58.324745    7672 start.go:83] releasing machines lock for "embed-certs-589000", held for 22.233916ms
	W0624 03:42:58.324909    7672 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-589000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-589000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:42:58.333644    7672 out.go:177] 
	W0624 03:42:58.337747    7672 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:42:58.337801    7672 out.go:239] * 
	* 
	W0624 03:42:58.340734    7672 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:42:58.348623    7672 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-589000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-589000 -n embed-certs-589000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-589000 -n embed-certs-589000: exit status 7 (70.031458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-589000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.26s)
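Editor's note: every start failure in this group reduces to the same root cause visible above: the qemu2 driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. A minimal Go sketch that probes the same socket outside of minikube (the socket path is taken from the log above; the two-second timeout is an arbitrary choice):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same unix socket the qemu2 driver passes to socket_vmnet_client.
	const sock = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" here matches the driver failure in the log:
		// the socket file may exist, but no daemon is accepting on it.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe fails on the CI host, the repeated exit status 80 (GUEST_PROVISION) failures in this report are expected: no VM can obtain networking, so every start and restart attempt dies at the same step.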

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-589000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-589000 -n embed-certs-589000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-589000 -n embed-certs-589000: exit status 7 (33.289208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-589000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-589000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-589000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-589000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.588833ms)

** stderr ** 
	error: context "embed-certs-589000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-589000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-589000 -n embed-certs-589000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-589000 -n embed-certs-589000: exit status 7 (30.738459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-589000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)
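Editor's note: the `context "embed-certs-589000" does not exist` errors in this and the previous test are a downstream symptom: SecondStart never brought the VM up, so no kubeconfig context was written, and every kubectl call fails at context resolution. A sketch of that lookup via client-go's clientcmd (assuming the tests resolve contexts this way, which the "client config:" prefix suggests but the test source is not shown here; the profile name is taken from the log):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Resolve the kubeconfig context the way kubectl --context does.
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	overrides := &clientcmd.ConfigOverrides{CurrentContext: "embed-certs-589000"}

	cfg := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides)
	if _, err := cfg.ClientConfig(); err != nil {
		// With no such context in the kubeconfig this yields the same
		// `context "embed-certs-589000" does not exist` error seen above.
		fmt.Println(err)
	}
}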

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-589000 image list --format=json
start_stop_delete_test.go:304: v1.30.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.2",
- 	"registry.k8s.io/kube-controller-manager:v1.30.2",
- 	"registry.k8s.io/kube-proxy:v1.30.2",
- 	"registry.k8s.io/kube-scheduler:v1.30.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-589000 -n embed-certs-589000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-589000 -n embed-certs-589000: exit status 7 (29.881708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-589000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
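Editor's note: the "(-want +got)" diff above is the shape produced by a go-cmp-style comparison; because the host never started, `image list` returns nothing and every expected image shows as missing. An illustrative reproduction (go-cmp is inferred from the diff format, not confirmed from the test source; the image list is abbreviated):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	// Images the test expects for v1.30.2 (subset of the list above).
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.30.2",
		"registry.k8s.io/pause:3.9",
	}
	// A stopped host reports no images at all.
	got := []string{}

	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.30.2 images missing (-want +got):\n%s", diff)
	}
}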

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-589000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-589000 --alsologtostderr -v=1: exit status 83 (42.759792ms)

-- stdout --
	* The control-plane node embed-certs-589000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-589000"

-- /stdout --
** stderr ** 
	I0624 03:42:58.619505    7691 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:42:58.619667    7691 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:42:58.619670    7691 out.go:304] Setting ErrFile to fd 2...
	I0624 03:42:58.619673    7691 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:42:58.619802    7691 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:42:58.620013    7691 out.go:298] Setting JSON to false
	I0624 03:42:58.620019    7691 mustload.go:65] Loading cluster: embed-certs-589000
	I0624 03:42:58.620200    7691 config.go:182] Loaded profile config "embed-certs-589000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:42:58.624981    7691 out.go:177] * The control-plane node embed-certs-589000 host is not running: state=Stopped
	I0624 03:42:58.628883    7691 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-589000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-589000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-589000 -n embed-certs-589000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-589000 -n embed-certs-589000: exit status 7 (30.546291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-589000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-589000 -n embed-certs-589000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-589000 -n embed-certs-589000: exit status 7 (30.055333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-589000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
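Editor's note: `pause` exits 83 here because the profile exists but its host is stopped (the log itself prints "host is not running: state=Stopped"), while the post-mortem `status` probes exit 7, which helpers_test.go explicitly tolerates ("may be ok"). A sketch of the harness-style exit-code inspection in Go (binary path and profile name taken from the log above):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}", "-p", "embed-certs-589000")

	out, err := cmd.Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// A non-zero status exit (7 in the runs above) still carries usable
		// stdout, which is how the post-mortem helper reads "Stopped".
		fmt.Printf("state=%s exit=%d\n", out, exitErr.ExitCode())
		return
	}
	fmt.Printf("state=%s\n", out)
}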

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-353000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-353000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (9.838859667s)

-- stdout --
	* [default-k8s-diff-port-353000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-353000" primary control-plane node in "default-k8s-diff-port-353000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-353000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0624 03:42:59.299899    7726 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:42:59.300061    7726 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:42:59.300064    7726 out.go:304] Setting ErrFile to fd 2...
	I0624 03:42:59.300066    7726 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:42:59.300195    7726 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:42:59.301270    7726 out.go:298] Setting JSON to false
	I0624 03:42:59.317288    7726 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6149,"bootTime":1719219630,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:42:59.317358    7726 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:42:59.322385    7726 out.go:177] * [default-k8s-diff-port-353000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:42:59.328519    7726 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:42:59.328562    7726 notify.go:220] Checking for updates...
	I0624 03:42:59.336381    7726 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:42:59.339442    7726 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:42:59.342509    7726 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:42:59.345416    7726 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:42:59.348466    7726 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:42:59.351792    7726 config.go:182] Loaded profile config "cert-expiration-509000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:42:59.351863    7726 config.go:182] Loaded profile config "multinode-913000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:42:59.351919    7726 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:42:59.355411    7726 out.go:177] * Using the qemu2 driver based on user configuration
	I0624 03:42:59.362337    7726 start.go:297] selected driver: qemu2
	I0624 03:42:59.362344    7726 start.go:901] validating driver "qemu2" against <nil>
	I0624 03:42:59.362350    7726 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:42:59.364591    7726 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0624 03:42:59.367448    7726 out.go:177] * Automatically selected the socket_vmnet network
	I0624 03:42:59.371499    7726 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 03:42:59.371546    7726 cni.go:84] Creating CNI manager for ""
	I0624 03:42:59.371557    7726 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:42:59.371562    7726 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0624 03:42:59.371609    7726 start.go:340] cluster config:
	{Name:default-k8s-diff-port-353000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:42:59.376055    7726 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:42:59.384469    7726 out.go:177] * Starting "default-k8s-diff-port-353000" primary control-plane node in "default-k8s-diff-port-353000" cluster
	I0624 03:42:59.388379    7726 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:42:59.388394    7726 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0624 03:42:59.388403    7726 cache.go:56] Caching tarball of preloaded images
	I0624 03:42:59.388476    7726 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0624 03:42:59.388500    7726 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:42:59.388584    7726 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/default-k8s-diff-port-353000/config.json ...
	I0624 03:42:59.388595    7726 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/default-k8s-diff-port-353000/config.json: {Name:mk3b5cc300b0598ef9a7320755d02765dc681f60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:42:59.388944    7726 start.go:360] acquireMachinesLock for default-k8s-diff-port-353000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:42:59.388982    7726 start.go:364] duration metric: took 30.334µs to acquireMachinesLock for "default-k8s-diff-port-353000"
	I0624 03:42:59.388993    7726 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:42:59.389027    7726 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:42:59.393440    7726 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0624 03:42:59.411231    7726 start.go:159] libmachine.API.Create for "default-k8s-diff-port-353000" (driver="qemu2")
	I0624 03:42:59.411254    7726 client.go:168] LocalClient.Create starting
	I0624 03:42:59.411317    7726 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:42:59.411351    7726 main.go:141] libmachine: Decoding PEM data...
	I0624 03:42:59.411363    7726 main.go:141] libmachine: Parsing certificate...
	I0624 03:42:59.411402    7726 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:42:59.411429    7726 main.go:141] libmachine: Decoding PEM data...
	I0624 03:42:59.411439    7726 main.go:141] libmachine: Parsing certificate...
	I0624 03:42:59.411909    7726 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:42:59.555336    7726 main.go:141] libmachine: Creating SSH key...
	I0624 03:42:59.616981    7726 main.go:141] libmachine: Creating Disk image...
	I0624 03:42:59.616987    7726 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:42:59.617211    7726 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/default-k8s-diff-port-353000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/default-k8s-diff-port-353000/disk.qcow2
	I0624 03:42:59.629937    7726 main.go:141] libmachine: STDOUT: 
	I0624 03:42:59.629954    7726 main.go:141] libmachine: STDERR: 
	I0624 03:42:59.630014    7726 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/default-k8s-diff-port-353000/disk.qcow2 +20000M
	I0624 03:42:59.641020    7726 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:42:59.641034    7726 main.go:141] libmachine: STDERR: 
	I0624 03:42:59.641058    7726 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/default-k8s-diff-port-353000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/default-k8s-diff-port-353000/disk.qcow2
	I0624 03:42:59.641069    7726 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:42:59.641104    7726 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/default-k8s-diff-port-353000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/default-k8s-diff-port-353000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/default-k8s-diff-port-353000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:b9:bc:e8:74:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/default-k8s-diff-port-353000/disk.qcow2
	I0624 03:42:59.642735    7726 main.go:141] libmachine: STDOUT: 
	I0624 03:42:59.642747    7726 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:42:59.642772    7726 client.go:171] duration metric: took 231.513959ms to LocalClient.Create
	I0624 03:43:01.644932    7726 start.go:128] duration metric: took 2.255898583s to createHost
	I0624 03:43:01.645006    7726 start.go:83] releasing machines lock for "default-k8s-diff-port-353000", held for 2.256033125s
	W0624 03:43:01.645065    7726 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:43:01.662398    7726 out.go:177] * Deleting "default-k8s-diff-port-353000" in qemu2 ...
	W0624 03:43:01.691388    7726 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:43:01.691427    7726 start.go:728] Will try again in 5 seconds ...
	I0624 03:43:06.693631    7726 start.go:360] acquireMachinesLock for default-k8s-diff-port-353000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:43:06.694040    7726 start.go:364] duration metric: took 342.875µs to acquireMachinesLock for "default-k8s-diff-port-353000"
	I0624 03:43:06.694161    7726 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:43:06.694413    7726 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:43:06.712012    7726 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0624 03:43:06.761448    7726 start.go:159] libmachine.API.Create for "default-k8s-diff-port-353000" (driver="qemu2")
	I0624 03:43:06.761494    7726 client.go:168] LocalClient.Create starting
	I0624 03:43:06.761602    7726 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:43:06.761674    7726 main.go:141] libmachine: Decoding PEM data...
	I0624 03:43:06.761689    7726 main.go:141] libmachine: Parsing certificate...
	I0624 03:43:06.761752    7726 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:43:06.761796    7726 main.go:141] libmachine: Decoding PEM data...
	I0624 03:43:06.761807    7726 main.go:141] libmachine: Parsing certificate...
	I0624 03:43:06.762310    7726 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:43:06.913828    7726 main.go:141] libmachine: Creating SSH key...
	I0624 03:43:07.042300    7726 main.go:141] libmachine: Creating Disk image...
	I0624 03:43:07.042305    7726 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:43:07.042504    7726 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/default-k8s-diff-port-353000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/default-k8s-diff-port-353000/disk.qcow2
	I0624 03:43:07.055281    7726 main.go:141] libmachine: STDOUT: 
	I0624 03:43:07.055340    7726 main.go:141] libmachine: STDERR: 
	I0624 03:43:07.055388    7726 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/default-k8s-diff-port-353000/disk.qcow2 +20000M
	I0624 03:43:07.066307    7726 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:43:07.066329    7726 main.go:141] libmachine: STDERR: 
	I0624 03:43:07.066343    7726 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/default-k8s-diff-port-353000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/default-k8s-diff-port-353000/disk.qcow2
	I0624 03:43:07.066350    7726 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:43:07.066392    7726 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/default-k8s-diff-port-353000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/default-k8s-diff-port-353000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/default-k8s-diff-port-353000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:88:af:6f:fa:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/default-k8s-diff-port-353000/disk.qcow2
	I0624 03:43:07.068091    7726 main.go:141] libmachine: STDOUT: 
	I0624 03:43:07.068105    7726 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:43:07.068119    7726 client.go:171] duration metric: took 306.621709ms to LocalClient.Create
	I0624 03:43:09.070448    7726 start.go:128] duration metric: took 2.376001s to createHost
	I0624 03:43:09.070542    7726 start.go:83] releasing machines lock for "default-k8s-diff-port-353000", held for 2.376496166s
	W0624 03:43:09.070944    7726 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-353000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-353000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:43:09.076881    7726 out.go:177] 
	W0624 03:43:09.084904    7726 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:43:09.084929    7726 out.go:239] * 
	* 
	W0624 03:43:09.087837    7726 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:43:09.095644    7726 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-353000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000: exit status 7 (65.720667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-353000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.91s)
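Editor's note: FirstStart fails the same way on a freshly created VM, which rules out a stale profile and points at the socket_vmnet daemon itself. Before reaching for the dial probe shown earlier, it can be worth checking whether the socket file even exists and is a unix socket (path from the log; this is a diagnostic sketch, not part of the test suite):

package main

import (
	"fmt"
	"os"
)

func main() {
	const sock = "/var/run/socket_vmnet"

	info, err := os.Stat(sock)
	if err != nil {
		// No socket file at all: socket_vmnet was likely never started.
		fmt.Println("stat failed:", err)
		return
	}
	if info.Mode()&os.ModeSocket == 0 {
		fmt.Println(sock, "exists but is not a unix socket")
		return
	}
	// File exists and is a socket; "connection refused" then means the
	// daemon that created it is no longer accepting connections.
	fmt.Println(sock, "is present; probe it with a unix dial to confirm")
}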

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-353000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-353000 create -f testdata/busybox.yaml: exit status 1 (28.976792ms)

** stderr ** 
	error: context "default-k8s-diff-port-353000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-353000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000: exit status 7 (30.384084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-353000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000: exit status 7 (30.41875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-353000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-353000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-353000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-353000 describe deploy/metrics-server -n kube-system: exit status 1 (27.314208ms)

** stderr ** 
	error: context "default-k8s-diff-port-353000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-353000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000: exit status 7 (30.529166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-353000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-353000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-353000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (5.179630292s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-353000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-353000" primary control-plane node in "default-k8s-diff-port-353000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-353000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-353000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0624 03:43:11.639099    7768 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:43:11.639234    7768 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:43:11.639236    7768 out.go:304] Setting ErrFile to fd 2...
	I0624 03:43:11.639239    7768 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:43:11.639389    7768 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:43:11.640427    7768 out.go:298] Setting JSON to false
	I0624 03:43:11.656515    7768 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6161,"bootTime":1719219630,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:43:11.656581    7768 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:43:11.660621    7768 out.go:177] * [default-k8s-diff-port-353000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:43:11.668594    7768 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:43:11.668672    7768 notify.go:220] Checking for updates...
	I0624 03:43:11.675551    7768 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:43:11.678588    7768 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:43:11.681534    7768 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:43:11.684546    7768 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:43:11.687583    7768 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:43:11.690839    7768 config.go:182] Loaded profile config "default-k8s-diff-port-353000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:43:11.691099    7768 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:43:11.694538    7768 out.go:177] * Using the qemu2 driver based on existing profile
	I0624 03:43:11.701551    7768 start.go:297] selected driver: qemu2
	I0624 03:43:11.701559    7768 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:43:11.701638    7768 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:43:11.704169    7768 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 03:43:11.704217    7768 cni.go:84] Creating CNI manager for ""
	I0624 03:43:11.704225    7768 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:43:11.704249    7768 start.go:340] cluster config:
	{Name:default-k8s-diff-port-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:43:11.708778    7768 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:43:11.712573    7768 out.go:177] * Starting "default-k8s-diff-port-353000" primary control-plane node in "default-k8s-diff-port-353000" cluster
	I0624 03:43:11.716502    7768 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:43:11.716517    7768 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0624 03:43:11.716525    7768 cache.go:56] Caching tarball of preloaded images
	I0624 03:43:11.716583    7768 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0624 03:43:11.716589    7768 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:43:11.716665    7768 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/default-k8s-diff-port-353000/config.json ...
	I0624 03:43:11.717012    7768 start.go:360] acquireMachinesLock for default-k8s-diff-port-353000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:43:11.717039    7768 start.go:364] duration metric: took 21.416µs to acquireMachinesLock for "default-k8s-diff-port-353000"
	I0624 03:43:11.717048    7768 start.go:96] Skipping create...Using existing machine configuration
	I0624 03:43:11.717054    7768 fix.go:54] fixHost starting: 
	I0624 03:43:11.717168    7768 fix.go:112] recreateIfNeeded on default-k8s-diff-port-353000: state=Stopped err=<nil>
	W0624 03:43:11.717177    7768 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 03:43:11.721579    7768 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-353000" ...
	I0624 03:43:11.728564    7768 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/default-k8s-diff-port-353000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/default-k8s-diff-port-353000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/default-k8s-diff-port-353000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:88:af:6f:fa:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/default-k8s-diff-port-353000/disk.qcow2
	I0624 03:43:11.730568    7768 main.go:141] libmachine: STDOUT: 
	I0624 03:43:11.730588    7768 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:43:11.730616    7768 fix.go:56] duration metric: took 13.563042ms for fixHost
	I0624 03:43:11.730621    7768 start.go:83] releasing machines lock for "default-k8s-diff-port-353000", held for 13.577125ms
	W0624 03:43:11.730627    7768 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:43:11.730659    7768 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:43:11.730664    7768 start.go:728] Will try again in 5 seconds ...
	I0624 03:43:16.732815    7768 start.go:360] acquireMachinesLock for default-k8s-diff-port-353000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:43:16.733217    7768 start.go:364] duration metric: took 307.167µs to acquireMachinesLock for "default-k8s-diff-port-353000"
	I0624 03:43:16.733331    7768 start.go:96] Skipping create...Using existing machine configuration
	I0624 03:43:16.733349    7768 fix.go:54] fixHost starting: 
	I0624 03:43:16.734039    7768 fix.go:112] recreateIfNeeded on default-k8s-diff-port-353000: state=Stopped err=<nil>
	W0624 03:43:16.734065    7768 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 03:43:16.743595    7768 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-353000" ...
	I0624 03:43:16.746871    7768 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/default-k8s-diff-port-353000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/default-k8s-diff-port-353000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/default-k8s-diff-port-353000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:88:af:6f:fa:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/default-k8s-diff-port-353000/disk.qcow2
	I0624 03:43:16.755684    7768 main.go:141] libmachine: STDOUT: 
	I0624 03:43:16.755752    7768 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:43:16.755841    7768 fix.go:56] duration metric: took 22.491583ms for fixHost
	I0624 03:43:16.755862    7768 start.go:83] releasing machines lock for "default-k8s-diff-port-353000", held for 22.619542ms
	W0624 03:43:16.756032    7768 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-353000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-353000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:43:16.762624    7768 out.go:177] 
	W0624 03:43:16.766681    7768 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:43:16.766703    7768 out.go:239] * 
	* 
	W0624 03:43:16.769148    7768 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:43:16.776679    7768 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-353000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000: exit status 7 (68.653375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-353000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.25s)
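
Note: the restart fails before Kubernetes is involved: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched with its network device attached. A host-side triage sketch, reusing the install paths from the command line logged above (the probe and the daemon invocation are illustrative, following socket_vmnet's documented usage rather than this test's tooling):

	# does the unix socket exist, and does the daemon accept connections?
	ls -l /var/run/socket_vmnet
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true && echo reachable
	# if unreachable, the daemon can be started as root, e.g.:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &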

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-353000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000: exit status 7 (32.350791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-353000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-353000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-353000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-353000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.635083ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-353000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-353000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000: exit status 7 (30.361041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-353000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-353000 image list --format=json
start_stop_delete_test.go:304: v1.30.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.2",
- 	"registry.k8s.io/kube-controller-manager:v1.30.2",
- 	"registry.k8s.io/kube-proxy:v1.30.2",
- 	"registry.k8s.io/kube-scheduler:v1.30.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000: exit status 7 (29.550541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-353000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
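
Note: the want/got diff above is one-sided because "image list" against a stopped profile returns nothing, so all eight expected v1.30.2 images are reported missing; the assertion is really detecting the dead VM, not an image regression. A spot check against a healthy profile might look like this (hypothetical profile name):

	out/minikube-darwin-arm64 -p <running-profile> image list --format=table
	# a started v1.30.2 cluster should list kube-apiserver, kube-controller-manager,
	# kube-proxy, kube-scheduler, etcd, coredns, pause and storage-provisioner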

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-353000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-353000 --alsologtostderr -v=1: exit status 83 (40.776625ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-353000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0624 03:43:17.045532    7787 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:43:17.045688    7787 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:43:17.045691    7787 out.go:304] Setting ErrFile to fd 2...
	I0624 03:43:17.045694    7787 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:43:17.045816    7787 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:43:17.046043    7787 out.go:298] Setting JSON to false
	I0624 03:43:17.046050    7787 mustload.go:65] Loading cluster: default-k8s-diff-port-353000
	I0624 03:43:17.046254    7787 config.go:182] Loaded profile config "default-k8s-diff-port-353000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:43:17.050620    7787 out.go:177] * The control-plane node default-k8s-diff-port-353000 host is not running: state=Stopped
	I0624 03:43:17.054720    7787 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-353000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-353000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000: exit status 7 (29.542417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-353000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000: exit status 7 (29.849125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-353000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
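
Note: pause exits 83 after reporting that the control-plane host is not running, while each post-mortem status probe exits 7, which helpers_test.go deliberately tolerates ("may be ok"). A sketch of that tolerant pattern (treating status exit 7 as minikube's code for a fully stopped host is an assumption about minikube internals):

	out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000; rc=$?
	# exit 7 means a "Stopped" host and is acceptable during post-mortem; anything else is a real error
	[ "$rc" -eq 0 ] || [ "$rc" -eq 7 ] || echo "unexpected status exit $rc"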

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (10.05s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-744000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-744000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (9.983923375s)

                                                
                                                
-- stdout --
	* [newest-cni-744000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-744000" primary control-plane node in "newest-cni-744000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-744000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0624 03:43:17.496396    7810 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:43:17.496526    7810 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:43:17.496529    7810 out.go:304] Setting ErrFile to fd 2...
	I0624 03:43:17.496532    7810 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:43:17.496684    7810 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:43:17.497724    7810 out.go:298] Setting JSON to false
	I0624 03:43:17.513865    7810 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6167,"bootTime":1719219630,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:43:17.513923    7810 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:43:17.518832    7810 out.go:177] * [newest-cni-744000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:43:17.525932    7810 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:43:17.525995    7810 notify.go:220] Checking for updates...
	I0624 03:43:17.533868    7810 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:43:17.536975    7810 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:43:17.539891    7810 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:43:17.542949    7810 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:43:17.545935    7810 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:43:17.549193    7810 config.go:182] Loaded profile config "cert-expiration-509000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:43:17.549255    7810 config.go:182] Loaded profile config "multinode-913000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:43:17.549312    7810 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:43:17.552862    7810 out.go:177] * Using the qemu2 driver based on user configuration
	I0624 03:43:17.559860    7810 start.go:297] selected driver: qemu2
	I0624 03:43:17.559867    7810 start.go:901] validating driver "qemu2" against <nil>
	I0624 03:43:17.559874    7810 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:43:17.562339    7810 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0624 03:43:17.562363    7810 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0624 03:43:17.565842    7810 out.go:177] * Automatically selected the socket_vmnet network
	I0624 03:43:17.572895    7810 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0624 03:43:17.572911    7810 cni.go:84] Creating CNI manager for ""
	I0624 03:43:17.572918    7810 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:43:17.572922    7810 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0624 03:43:17.572944    7810 start.go:340] cluster config:
	{Name:newest-cni-744000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:newest-cni-744000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:43:17.577396    7810 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:43:17.585758    7810 out.go:177] * Starting "newest-cni-744000" primary control-plane node in "newest-cni-744000" cluster
	I0624 03:43:17.589871    7810 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:43:17.589887    7810 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0624 03:43:17.589895    7810 cache.go:56] Caching tarball of preloaded images
	I0624 03:43:17.589959    7810 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0624 03:43:17.589966    7810 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:43:17.590032    7810 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/newest-cni-744000/config.json ...
	I0624 03:43:17.590043    7810 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/newest-cni-744000/config.json: {Name:mkcc60579e9eaefc1559cf3d495745d713bef4a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:43:17.590373    7810 start.go:360] acquireMachinesLock for newest-cni-744000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:43:17.590406    7810 start.go:364] duration metric: took 27.709µs to acquireMachinesLock for "newest-cni-744000"
	I0624 03:43:17.590416    7810 start.go:93] Provisioning new machine with config: &{Name:newest-cni-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:newest-cni-744000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:43:17.590442    7810 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:43:17.596903    7810 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0624 03:43:17.614458    7810 start.go:159] libmachine.API.Create for "newest-cni-744000" (driver="qemu2")
	I0624 03:43:17.614484    7810 client.go:168] LocalClient.Create starting
	I0624 03:43:17.614538    7810 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:43:17.614570    7810 main.go:141] libmachine: Decoding PEM data...
	I0624 03:43:17.614587    7810 main.go:141] libmachine: Parsing certificate...
	I0624 03:43:17.614624    7810 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:43:17.614648    7810 main.go:141] libmachine: Decoding PEM data...
	I0624 03:43:17.614656    7810 main.go:141] libmachine: Parsing certificate...
	I0624 03:43:17.615087    7810 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:43:17.759554    7810 main.go:141] libmachine: Creating SSH key...
	I0624 03:43:17.945299    7810 main.go:141] libmachine: Creating Disk image...
	I0624 03:43:17.945305    7810 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:43:17.945816    7810 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/newest-cni-744000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/newest-cni-744000/disk.qcow2
	I0624 03:43:17.959008    7810 main.go:141] libmachine: STDOUT: 
	I0624 03:43:17.959030    7810 main.go:141] libmachine: STDERR: 
	I0624 03:43:17.959094    7810 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/newest-cni-744000/disk.qcow2 +20000M
	I0624 03:43:17.969916    7810 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:43:17.969931    7810 main.go:141] libmachine: STDERR: 
	I0624 03:43:17.969954    7810 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/newest-cni-744000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/newest-cni-744000/disk.qcow2
	I0624 03:43:17.969959    7810 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:43:17.969999    7810 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/newest-cni-744000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/newest-cni-744000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/newest-cni-744000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:2c:3e:b9:2c:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/newest-cni-744000/disk.qcow2
	I0624 03:43:17.971682    7810 main.go:141] libmachine: STDOUT: 
	I0624 03:43:17.971697    7810 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:43:17.971717    7810 client.go:171] duration metric: took 357.230375ms to LocalClient.Create
	I0624 03:43:19.973874    7810 start.go:128] duration metric: took 2.383430459s to createHost
	I0624 03:43:19.973932    7810 start.go:83] releasing machines lock for "newest-cni-744000", held for 2.383537583s
	W0624 03:43:19.974027    7810 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:43:19.984076    7810 out.go:177] * Deleting "newest-cni-744000" in qemu2 ...
	W0624 03:43:20.020454    7810 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:43:20.020474    7810 start.go:728] Will try again in 5 seconds ...
	I0624 03:43:25.022572    7810 start.go:360] acquireMachinesLock for newest-cni-744000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:43:25.023140    7810 start.go:364] duration metric: took 400.5µs to acquireMachinesLock for "newest-cni-744000"
	I0624 03:43:25.023310    7810 start.go:93] Provisioning new machine with config: &{Name:newest-cni-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:newest-cni-744000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:43:25.023634    7810 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:43:25.030311    7810 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0624 03:43:25.079817    7810 start.go:159] libmachine.API.Create for "newest-cni-744000" (driver="qemu2")
	I0624 03:43:25.079860    7810 client.go:168] LocalClient.Create starting
	I0624 03:43:25.079966    7810 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:43:25.080021    7810 main.go:141] libmachine: Decoding PEM data...
	I0624 03:43:25.080038    7810 main.go:141] libmachine: Parsing certificate...
	I0624 03:43:25.080099    7810 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:43:25.080143    7810 main.go:141] libmachine: Decoding PEM data...
	I0624 03:43:25.080153    7810 main.go:141] libmachine: Parsing certificate...
	I0624 03:43:25.080830    7810 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:43:25.233569    7810 main.go:141] libmachine: Creating SSH key...
	I0624 03:43:25.378591    7810 main.go:141] libmachine: Creating Disk image...
	I0624 03:43:25.378597    7810 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:43:25.378822    7810 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/newest-cni-744000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/newest-cni-744000/disk.qcow2
	I0624 03:43:25.391655    7810 main.go:141] libmachine: STDOUT: 
	I0624 03:43:25.391676    7810 main.go:141] libmachine: STDERR: 
	I0624 03:43:25.391750    7810 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/newest-cni-744000/disk.qcow2 +20000M
	I0624 03:43:25.402714    7810 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:43:25.402730    7810 main.go:141] libmachine: STDERR: 
	I0624 03:43:25.402742    7810 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/newest-cni-744000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/newest-cni-744000/disk.qcow2
	I0624 03:43:25.402747    7810 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:43:25.402775    7810 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/newest-cni-744000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/newest-cni-744000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/newest-cni-744000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:2c:c1:48:25:8c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/newest-cni-744000/disk.qcow2
	I0624 03:43:25.404525    7810 main.go:141] libmachine: STDOUT: 
	I0624 03:43:25.404538    7810 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:43:25.404550    7810 client.go:171] duration metric: took 324.68825ms to LocalClient.Create
	I0624 03:43:27.406716    7810 start.go:128] duration metric: took 2.383072916s to createHost
	I0624 03:43:27.406778    7810 start.go:83] releasing machines lock for "newest-cni-744000", held for 2.383584666s
	W0624 03:43:27.407199    7810 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-744000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-744000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:43:27.421860    7810 out.go:177] 
	W0624 03:43:27.427049    7810 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:43:27.427100    7810 out.go:239] * 
	* 
	W0624 03:43:27.429656    7810 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:43:27.438816    7810 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-744000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-744000 -n newest-cni-744000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-744000 -n newest-cni-744000: exit status 7 (67.10875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-744000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.05s)
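Every failure in this serial group traces to the same stderr line captured above: the socket_vmnet daemon behind "/var/run/socket_vmnet" refused the client connection, so the qemu2 VM never received a network fd. A minimal triage sketch on the build agent, assuming the install layout shown in the logs (client binary under /opt/socket_vmnet, socket at /var/run/socket_vmnet) and a Homebrew-managed service, which is an assumption not confirmed by this report:

	ls -l /var/run/socket_vmnet               # does the unix socket exist where the client looks?
	pgrep -fl socket_vmnet                    # is the socket_vmnet daemon process running at all?
	sudo brew services restart socket_vmnet   # only if installed as a Homebrew service (assumption)

If the daemon is down, the suggested "minikube delete -p newest-cni-744000" is unlikely to help, since the refusal happens before any VM state is touched.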

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-744000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-744000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (5.182633917s)

                                                
                                                
-- stdout --
	* [newest-cni-744000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-744000" primary control-plane node in "newest-cni-744000" cluster
	* Restarting existing qemu2 VM for "newest-cni-744000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-744000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0624 03:43:29.840830    7846 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:43:29.840947    7846 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:43:29.840950    7846 out.go:304] Setting ErrFile to fd 2...
	I0624 03:43:29.840953    7846 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:43:29.841080    7846 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:43:29.842094    7846 out.go:298] Setting JSON to false
	I0624 03:43:29.859133    7846 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6179,"bootTime":1719219630,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:43:29.859215    7846 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:43:29.864623    7846 out.go:177] * [newest-cni-744000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:43:29.871545    7846 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:43:29.871591    7846 notify.go:220] Checking for updates...
	I0624 03:43:29.879475    7846 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:43:29.882554    7846 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:43:29.885543    7846 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:43:29.888548    7846 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:43:29.891527    7846 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:43:29.894738    7846 config.go:182] Loaded profile config "newest-cni-744000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:43:29.894990    7846 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:43:29.899508    7846 out.go:177] * Using the qemu2 driver based on existing profile
	I0624 03:43:29.906527    7846 start.go:297] selected driver: qemu2
	I0624 03:43:29.906534    7846 start.go:901] validating driver "qemu2" against &{Name:newest-cni-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:newest-cni-744000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:43:29.906589    7846 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:43:29.909097    7846 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0624 03:43:29.909141    7846 cni.go:84] Creating CNI manager for ""
	I0624 03:43:29.909148    7846 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:43:29.909175    7846 start.go:340] cluster config:
	{Name:newest-cni-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:newest-cni-744000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:43:29.913514    7846 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:43:29.921548    7846 out.go:177] * Starting "newest-cni-744000" primary control-plane node in "newest-cni-744000" cluster
	I0624 03:43:29.925567    7846 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:43:29.925582    7846 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0624 03:43:29.925590    7846 cache.go:56] Caching tarball of preloaded images
	I0624 03:43:29.925677    7846 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0624 03:43:29.925685    7846 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:43:29.925747    7846 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/newest-cni-744000/config.json ...
	I0624 03:43:29.926165    7846 start.go:360] acquireMachinesLock for newest-cni-744000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:43:29.926195    7846 start.go:364] duration metric: took 23.708µs to acquireMachinesLock for "newest-cni-744000"
	I0624 03:43:29.926203    7846 start.go:96] Skipping create...Using existing machine configuration
	I0624 03:43:29.926210    7846 fix.go:54] fixHost starting: 
	I0624 03:43:29.926327    7846 fix.go:112] recreateIfNeeded on newest-cni-744000: state=Stopped err=<nil>
	W0624 03:43:29.926335    7846 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 03:43:29.930550    7846 out.go:177] * Restarting existing qemu2 VM for "newest-cni-744000" ...
	I0624 03:43:29.937526    7846 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/newest-cni-744000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/newest-cni-744000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/newest-cni-744000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:2c:c1:48:25:8c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/newest-cni-744000/disk.qcow2
	I0624 03:43:29.939469    7846 main.go:141] libmachine: STDOUT: 
	I0624 03:43:29.939484    7846 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:43:29.939509    7846 fix.go:56] duration metric: took 13.3015ms for fixHost
	I0624 03:43:29.939514    7846 start.go:83] releasing machines lock for "newest-cni-744000", held for 13.314375ms
	W0624 03:43:29.939521    7846 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:43:29.939554    7846 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:43:29.939559    7846 start.go:728] Will try again in 5 seconds ...
	I0624 03:43:34.941681    7846 start.go:360] acquireMachinesLock for newest-cni-744000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:43:34.942078    7846 start.go:364] duration metric: took 289.292µs to acquireMachinesLock for "newest-cni-744000"
	I0624 03:43:34.942217    7846 start.go:96] Skipping create...Using existing machine configuration
	I0624 03:43:34.942238    7846 fix.go:54] fixHost starting: 
	I0624 03:43:34.942963    7846 fix.go:112] recreateIfNeeded on newest-cni-744000: state=Stopped err=<nil>
	W0624 03:43:34.942988    7846 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 03:43:34.947480    7846 out.go:177] * Restarting existing qemu2 VM for "newest-cni-744000" ...
	I0624 03:43:34.952701    7846 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/newest-cni-744000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/newest-cni-744000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/newest-cni-744000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:2c:c1:48:25:8c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/newest-cni-744000/disk.qcow2
	I0624 03:43:34.961707    7846 main.go:141] libmachine: STDOUT: 
	I0624 03:43:34.961785    7846 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:43:34.961877    7846 fix.go:56] duration metric: took 19.640208ms for fixHost
	I0624 03:43:34.961901    7846 start.go:83] releasing machines lock for "newest-cni-744000", held for 19.797708ms
	W0624 03:43:34.962154    7846 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-744000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-744000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:43:34.970376    7846 out.go:177] 
	W0624 03:43:34.971842    7846 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:43:34.971862    7846 out.go:239] * 
	* 
	W0624 03:43:34.974345    7846 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:43:34.982388    7846 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-744000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-744000 -n newest-cni-744000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-744000 -n newest-cni-744000: exit status 7 (68.858458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-744000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-744000 image list --format=json
start_stop_delete_test.go:304: v1.30.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.2",
- 	"registry.k8s.io/kube-controller-manager:v1.30.2",
- 	"registry.k8s.io/kube-proxy:v1.30.2",
- 	"registry.k8s.io/kube-scheduler:v1.30.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-744000 -n newest-cni-744000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-744000 -n newest-cni-744000: exit status 7 (29.742541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-744000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-744000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-744000 --alsologtostderr -v=1: exit status 83 (41.720875ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-744000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-744000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0624 03:43:35.165396    7860 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:43:35.165771    7860 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:43:35.165775    7860 out.go:304] Setting ErrFile to fd 2...
	I0624 03:43:35.165777    7860 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:43:35.165974    7860 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:43:35.166238    7860 out.go:298] Setting JSON to false
	I0624 03:43:35.166246    7860 mustload.go:65] Loading cluster: newest-cni-744000
	I0624 03:43:35.166586    7860 config.go:182] Loaded profile config "newest-cni-744000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:43:35.169465    7860 out.go:177] * The control-plane node newest-cni-744000 host is not running: state=Stopped
	I0624 03:43:35.173375    7860 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-744000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-744000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-744000 -n newest-cni-744000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-744000 -n newest-cni-744000: exit status 7 (30.4165ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-744000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-744000 -n newest-cni-744000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-744000 -n newest-cni-744000: exit status 7 (29.685125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-744000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-871000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-871000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.877223583s)

                                                
                                                
-- stdout --
	* [auto-871000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-871000" primary control-plane node in "auto-871000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-871000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0624 03:43:35.624964    7883 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:43:35.625088    7883 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:43:35.625091    7883 out.go:304] Setting ErrFile to fd 2...
	I0624 03:43:35.625097    7883 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:43:35.625248    7883 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:43:35.626346    7883 out.go:298] Setting JSON to false
	I0624 03:43:35.642397    7883 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6185,"bootTime":1719219630,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:43:35.642461    7883 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:43:35.646601    7883 out.go:177] * [auto-871000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:43:35.652522    7883 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:43:35.652560    7883 notify.go:220] Checking for updates...
	I0624 03:43:35.658523    7883 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:43:35.661532    7883 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:43:35.662896    7883 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:43:35.669699    7883 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:43:35.672558    7883 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:43:35.675783    7883 config.go:182] Loaded profile config "cert-expiration-509000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:43:35.675841    7883 config.go:182] Loaded profile config "multinode-913000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:43:35.675891    7883 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:43:35.679488    7883 out.go:177] * Using the qemu2 driver based on user configuration
	I0624 03:43:35.686472    7883 start.go:297] selected driver: qemu2
	I0624 03:43:35.686478    7883 start.go:901] validating driver "qemu2" against <nil>
	I0624 03:43:35.686484    7883 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:43:35.688839    7883 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0624 03:43:35.691531    7883 out.go:177] * Automatically selected the socket_vmnet network
	I0624 03:43:35.694561    7883 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 03:43:35.694580    7883 cni.go:84] Creating CNI manager for ""
	I0624 03:43:35.694586    7883 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:43:35.694590    7883 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0624 03:43:35.694617    7883 start.go:340] cluster config:
	{Name:auto-871000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:auto-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:43:35.699126    7883 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:43:35.706507    7883 out.go:177] * Starting "auto-871000" primary control-plane node in "auto-871000" cluster
	I0624 03:43:35.710562    7883 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:43:35.710578    7883 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0624 03:43:35.710587    7883 cache.go:56] Caching tarball of preloaded images
	I0624 03:43:35.710657    7883 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0624 03:43:35.710662    7883 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:43:35.710726    7883 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/auto-871000/config.json ...
	I0624 03:43:35.710738    7883 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/auto-871000/config.json: {Name:mkfc28cc7f42d159313a90ae07da2d3483e3ee50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:43:35.711051    7883 start.go:360] acquireMachinesLock for auto-871000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:43:35.711087    7883 start.go:364] duration metric: took 29.958µs to acquireMachinesLock for "auto-871000"
	I0624 03:43:35.711098    7883 start.go:93] Provisioning new machine with config: &{Name:auto-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:auto-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:43:35.711138    7883 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:43:35.719540    7883 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0624 03:43:35.737317    7883 start.go:159] libmachine.API.Create for "auto-871000" (driver="qemu2")
	I0624 03:43:35.737353    7883 client.go:168] LocalClient.Create starting
	I0624 03:43:35.737440    7883 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:43:35.737473    7883 main.go:141] libmachine: Decoding PEM data...
	I0624 03:43:35.737483    7883 main.go:141] libmachine: Parsing certificate...
	I0624 03:43:35.737522    7883 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:43:35.737546    7883 main.go:141] libmachine: Decoding PEM data...
	I0624 03:43:35.737561    7883 main.go:141] libmachine: Parsing certificate...
	I0624 03:43:35.737915    7883 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:43:35.882353    7883 main.go:141] libmachine: Creating SSH key...
	I0624 03:43:36.014892    7883 main.go:141] libmachine: Creating Disk image...
	I0624 03:43:36.014897    7883 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:43:36.015128    7883 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/auto-871000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/auto-871000/disk.qcow2
	I0624 03:43:36.028013    7883 main.go:141] libmachine: STDOUT: 
	I0624 03:43:36.028029    7883 main.go:141] libmachine: STDERR: 
	I0624 03:43:36.028082    7883 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/auto-871000/disk.qcow2 +20000M
	I0624 03:43:36.039082    7883 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:43:36.039096    7883 main.go:141] libmachine: STDERR: 
	I0624 03:43:36.039106    7883 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/auto-871000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/auto-871000/disk.qcow2
	I0624 03:43:36.039120    7883 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:43:36.039150    7883 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/auto-871000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/auto-871000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/auto-871000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:20:de:cf:8c:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/auto-871000/disk.qcow2
	I0624 03:43:36.040848    7883 main.go:141] libmachine: STDOUT: 
	I0624 03:43:36.040862    7883 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:43:36.040883    7883 client.go:171] duration metric: took 303.5255ms to LocalClient.Create
	I0624 03:43:38.043041    7883 start.go:128] duration metric: took 2.331899834s to createHost
	I0624 03:43:38.043100    7883 start.go:83] releasing machines lock for "auto-871000", held for 2.332014458s
	W0624 03:43:38.043213    7883 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:43:38.054139    7883 out.go:177] * Deleting "auto-871000" in qemu2 ...
	W0624 03:43:38.091037    7883 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:43:38.091075    7883 start.go:728] Will try again in 5 seconds ...
	I0624 03:43:43.093209    7883 start.go:360] acquireMachinesLock for auto-871000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:43:43.093628    7883 start.go:364] duration metric: took 346.708µs to acquireMachinesLock for "auto-871000"
	I0624 03:43:43.093745    7883 start.go:93] Provisioning new machine with config: &{Name:auto-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:auto-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:43:43.094155    7883 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:43:43.111680    7883 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0624 03:43:43.161023    7883 start.go:159] libmachine.API.Create for "auto-871000" (driver="qemu2")
	I0624 03:43:43.161067    7883 client.go:168] LocalClient.Create starting
	I0624 03:43:43.161160    7883 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:43:43.161223    7883 main.go:141] libmachine: Decoding PEM data...
	I0624 03:43:43.161237    7883 main.go:141] libmachine: Parsing certificate...
	I0624 03:43:43.161293    7883 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:43:43.161334    7883 main.go:141] libmachine: Decoding PEM data...
	I0624 03:43:43.161358    7883 main.go:141] libmachine: Parsing certificate...
	I0624 03:43:43.161875    7883 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:43:43.314879    7883 main.go:141] libmachine: Creating SSH key...
	I0624 03:43:43.403976    7883 main.go:141] libmachine: Creating Disk image...
	I0624 03:43:43.403984    7883 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:43:43.404196    7883 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/auto-871000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/auto-871000/disk.qcow2
	I0624 03:43:43.416935    7883 main.go:141] libmachine: STDOUT: 
	I0624 03:43:43.416953    7883 main.go:141] libmachine: STDERR: 
	I0624 03:43:43.416999    7883 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/auto-871000/disk.qcow2 +20000M
	I0624 03:43:43.427974    7883 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:43:43.427988    7883 main.go:141] libmachine: STDERR: 
	I0624 03:43:43.428006    7883 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/auto-871000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/auto-871000/disk.qcow2
	I0624 03:43:43.428011    7883 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:43:43.428043    7883 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/auto-871000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/auto-871000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/auto-871000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:85:de:b1:f6:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/auto-871000/disk.qcow2
	I0624 03:43:43.429788    7883 main.go:141] libmachine: STDOUT: 
	I0624 03:43:43.429801    7883 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:43:43.429818    7883 client.go:171] duration metric: took 268.748042ms to LocalClient.Create
	I0624 03:43:45.431981    7883 start.go:128] duration metric: took 2.337791917s to createHost
	I0624 03:43:45.432030    7883 start.go:83] releasing machines lock for "auto-871000", held for 2.338400125s
	W0624 03:43:45.432426    7883 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-871000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-871000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:43:45.447813    7883 out.go:177] 
	W0624 03:43:45.452087    7883 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:43:45.452112    7883 out.go:239] * 
	* 
	W0624 03:43:45.454537    7883 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:43:45.462949    7883 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.88s)
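The create path dies at the same handshake as the restart path above: socket_vmnet_client exits with "Connection refused" before qemu-system-aarch64 is ever handed the network fd. A quick way to reproduce this outside minikube is to run the client by hand against a trivial command, using the binary and socket paths copied from the log (a sketch assuming socket_vmnet_client's usual "client SOCKET COMMAND..." invocation, in which it connects to the socket and then execs the command):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true; echo "client exit: $?"

A non-zero exit with the same "Connection refused" message would confirm the daemon, not minikube state, is at fault.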

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (10.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-871000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-871000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (10.044149083s)

                                                
                                                
-- stdout --
	* [kindnet-871000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-871000" primary control-plane node in "kindnet-871000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-871000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0624 03:43:47.641192    7993 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:43:47.641325    7993 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:43:47.641327    7993 out.go:304] Setting ErrFile to fd 2...
	I0624 03:43:47.641330    7993 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:43:47.641459    7993 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:43:47.642535    7993 out.go:298] Setting JSON to false
	I0624 03:43:47.659226    7993 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6197,"bootTime":1719219630,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:43:47.659297    7993 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:43:47.665320    7993 out.go:177] * [kindnet-871000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:43:47.672256    7993 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:43:47.672291    7993 notify.go:220] Checking for updates...
	I0624 03:43:47.681196    7993 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:43:47.685229    7993 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:43:47.689215    7993 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:43:47.692246    7993 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:43:47.695192    7993 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:43:47.698533    7993 config.go:182] Loaded profile config "cert-expiration-509000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:43:47.698609    7993 config.go:182] Loaded profile config "multinode-913000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:43:47.698648    7993 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:43:47.702229    7993 out.go:177] * Using the qemu2 driver based on user configuration
	I0624 03:43:47.709178    7993 start.go:297] selected driver: qemu2
	I0624 03:43:47.709183    7993 start.go:901] validating driver "qemu2" against <nil>
	I0624 03:43:47.709189    7993 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:43:47.711442    7993 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0624 03:43:47.714248    7993 out.go:177] * Automatically selected the socket_vmnet network
	I0624 03:43:47.718312    7993 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 03:43:47.718356    7993 cni.go:84] Creating CNI manager for "kindnet"
	I0624 03:43:47.718362    7993 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0624 03:43:47.718404    7993 start.go:340] cluster config:
	{Name:kindnet-871000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kindnet-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:43:47.722884    7993 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:43:47.731187    7993 out.go:177] * Starting "kindnet-871000" primary control-plane node in "kindnet-871000" cluster
	I0624 03:43:47.735248    7993 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:43:47.735264    7993 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0624 03:43:47.735281    7993 cache.go:56] Caching tarball of preloaded images
	I0624 03:43:47.735352    7993 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0624 03:43:47.735358    7993 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:43:47.735424    7993 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/kindnet-871000/config.json ...
	I0624 03:43:47.735435    7993 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/kindnet-871000/config.json: {Name:mk7261bc51117932cac14e58dc3097ab8aa5ffe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:43:47.735759    7993 start.go:360] acquireMachinesLock for kindnet-871000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:43:47.735791    7993 start.go:364] duration metric: took 27.458µs to acquireMachinesLock for "kindnet-871000"
	I0624 03:43:47.735802    7993 start.go:93] Provisioning new machine with config: &{Name:kindnet-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kindnet-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:43:47.735889    7993 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:43:47.739174    7993 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0624 03:43:47.756497    7993 start.go:159] libmachine.API.Create for "kindnet-871000" (driver="qemu2")
	I0624 03:43:47.756522    7993 client.go:168] LocalClient.Create starting
	I0624 03:43:47.756585    7993 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:43:47.756618    7993 main.go:141] libmachine: Decoding PEM data...
	I0624 03:43:47.756633    7993 main.go:141] libmachine: Parsing certificate...
	I0624 03:43:47.756666    7993 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:43:47.756688    7993 main.go:141] libmachine: Decoding PEM data...
	I0624 03:43:47.756702    7993 main.go:141] libmachine: Parsing certificate...
	I0624 03:43:47.757106    7993 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:43:47.910956    7993 main.go:141] libmachine: Creating SSH key...
	I0624 03:43:48.237503    7993 main.go:141] libmachine: Creating Disk image...
	I0624 03:43:48.237513    7993 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:43:48.237786    7993 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kindnet-871000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kindnet-871000/disk.qcow2
	I0624 03:43:48.251429    7993 main.go:141] libmachine: STDOUT: 
	I0624 03:43:48.251454    7993 main.go:141] libmachine: STDERR: 
	I0624 03:43:48.251514    7993 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kindnet-871000/disk.qcow2 +20000M
	I0624 03:43:48.262382    7993 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:43:48.262399    7993 main.go:141] libmachine: STDERR: 
	I0624 03:43:48.262418    7993 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kindnet-871000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kindnet-871000/disk.qcow2
	I0624 03:43:48.262423    7993 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:43:48.262454    7993 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kindnet-871000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kindnet-871000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kindnet-871000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:ab:d5:06:12:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kindnet-871000/disk.qcow2
	I0624 03:43:48.264144    7993 main.go:141] libmachine: STDOUT: 
	I0624 03:43:48.264161    7993 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:43:48.264180    7993 client.go:171] duration metric: took 507.656042ms to LocalClient.Create
	I0624 03:43:50.266344    7993 start.go:128] duration metric: took 2.530454s to createHost
	I0624 03:43:50.266408    7993 start.go:83] releasing machines lock for "kindnet-871000", held for 2.530629208s
	W0624 03:43:50.266457    7993 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:43:50.281740    7993 out.go:177] * Deleting "kindnet-871000" in qemu2 ...
	W0624 03:43:50.312689    7993 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:43:50.312723    7993 start.go:728] Will try again in 5 seconds ...
	I0624 03:43:55.314940    7993 start.go:360] acquireMachinesLock for kindnet-871000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:43:55.315379    7993 start.go:364] duration metric: took 337.458µs to acquireMachinesLock for "kindnet-871000"
	I0624 03:43:55.315496    7993 start.go:93] Provisioning new machine with config: &{Name:kindnet-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kindnet-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:43:55.315758    7993 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:43:55.334533    7993 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0624 03:43:55.383983    7993 start.go:159] libmachine.API.Create for "kindnet-871000" (driver="qemu2")
	I0624 03:43:55.384027    7993 client.go:168] LocalClient.Create starting
	I0624 03:43:55.384138    7993 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:43:55.384195    7993 main.go:141] libmachine: Decoding PEM data...
	I0624 03:43:55.384208    7993 main.go:141] libmachine: Parsing certificate...
	I0624 03:43:55.384266    7993 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:43:55.384308    7993 main.go:141] libmachine: Decoding PEM data...
	I0624 03:43:55.384318    7993 main.go:141] libmachine: Parsing certificate...
	I0624 03:43:55.385010    7993 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:43:55.536704    7993 main.go:141] libmachine: Creating SSH key...
	I0624 03:43:55.580664    7993 main.go:141] libmachine: Creating Disk image...
	I0624 03:43:55.580669    7993 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:43:55.580903    7993 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kindnet-871000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kindnet-871000/disk.qcow2
	I0624 03:43:55.593688    7993 main.go:141] libmachine: STDOUT: 
	I0624 03:43:55.593720    7993 main.go:141] libmachine: STDERR: 
	I0624 03:43:55.593778    7993 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kindnet-871000/disk.qcow2 +20000M
	I0624 03:43:55.604558    7993 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:43:55.604577    7993 main.go:141] libmachine: STDERR: 
	I0624 03:43:55.604595    7993 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kindnet-871000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kindnet-871000/disk.qcow2
	I0624 03:43:55.604601    7993 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:43:55.604639    7993 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kindnet-871000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kindnet-871000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kindnet-871000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:63:4e:4e:00:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kindnet-871000/disk.qcow2
	I0624 03:43:55.606351    7993 main.go:141] libmachine: STDOUT: 
	I0624 03:43:55.606365    7993 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:43:55.606376    7993 client.go:171] duration metric: took 222.345292ms to LocalClient.Create
	I0624 03:43:57.608587    7993 start.go:128] duration metric: took 2.292809542s to createHost
	I0624 03:43:57.608686    7993 start.go:83] releasing machines lock for "kindnet-871000", held for 2.293300833s
	W0624 03:43:57.609154    7993 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-871000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-871000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:43:57.623861    7993 out.go:177] 
	W0624 03:43:57.626995    7993 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:43:57.627048    7993 out.go:239] * 
	* 
	W0624 03:43:57.629645    7993 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:43:57.639876    7993 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (10.05s)
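Every failure in this group follows the same path: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU never launches and each start exits with status 80 after both create attempts. The Go sketch below is a hypothetical pre-flight check (it is not part of net_test.go or minikube); it only mirrors the unix-socket dial that socket_vmnet_client performs before handing a file descriptor to QEMU, and reproduces the "Connection refused" diagnosis seen above.

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Path taken from SocketVMnetPath in the cluster config logged above.
		const sock = "/var/run/socket_vmnet"
		// Dial the daemon's unix socket; "connection refused" here is the same
		// condition reported as `Failed to connect to "/var/run/socket_vmnet"`.
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is reachable")
	}

If the dial fails, the daemon is down or listening on a different path; when socket_vmnet was installed via Homebrew, restarting it (for example with sudo brew services start socket_vmnet) is the usual remedy before re-running the suite.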

TestNetworkPlugins/group/calico/Start (9.83s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-871000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-871000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.8228305s)

-- stdout --
	* [calico-871000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-871000" primary control-plane node in "calico-871000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-871000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0624 03:43:59.968571    8110 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:43:59.968692    8110 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:43:59.968695    8110 out.go:304] Setting ErrFile to fd 2...
	I0624 03:43:59.968698    8110 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:43:59.968827    8110 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:43:59.969842    8110 out.go:298] Setting JSON to false
	I0624 03:43:59.985672    8110 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6209,"bootTime":1719219630,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:43:59.985751    8110 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:43:59.991933    8110 out.go:177] * [calico-871000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:43:59.999929    8110 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:43:59.999984    8110 notify.go:220] Checking for updates...
	I0624 03:44:00.007740    8110 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:44:00.011859    8110 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:44:00.014902    8110 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:44:00.016245    8110 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:44:00.018886    8110 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:44:00.022310    8110 config.go:182] Loaded profile config "cert-expiration-509000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:44:00.022374    8110 config.go:182] Loaded profile config "multinode-913000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:44:00.022428    8110 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:44:00.026719    8110 out.go:177] * Using the qemu2 driver based on user configuration
	I0624 03:44:00.033898    8110 start.go:297] selected driver: qemu2
	I0624 03:44:00.033904    8110 start.go:901] validating driver "qemu2" against <nil>
	I0624 03:44:00.033910    8110 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:44:00.036154    8110 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0624 03:44:00.038916    8110 out.go:177] * Automatically selected the socket_vmnet network
	I0624 03:44:00.042171    8110 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 03:44:00.042220    8110 cni.go:84] Creating CNI manager for "calico"
	I0624 03:44:00.042225    8110 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0624 03:44:00.042260    8110 start.go:340] cluster config:
	{Name:calico-871000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:calico-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:44:00.046718    8110 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:44:00.054889    8110 out.go:177] * Starting "calico-871000" primary control-plane node in "calico-871000" cluster
	I0624 03:44:00.058850    8110 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:44:00.058865    8110 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0624 03:44:00.058874    8110 cache.go:56] Caching tarball of preloaded images
	I0624 03:44:00.058937    8110 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0624 03:44:00.058943    8110 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:44:00.059008    8110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/calico-871000/config.json ...
	I0624 03:44:00.059021    8110 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/calico-871000/config.json: {Name:mkbb4d168412c96512af857b661b3f3ebfb2dffd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:44:00.059357    8110 start.go:360] acquireMachinesLock for calico-871000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:44:00.059394    8110 start.go:364] duration metric: took 30.917µs to acquireMachinesLock for "calico-871000"
	I0624 03:44:00.059405    8110 start.go:93] Provisioning new machine with config: &{Name:calico-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:calico-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:44:00.059435    8110 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:44:00.066913    8110 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0624 03:44:00.084667    8110 start.go:159] libmachine.API.Create for "calico-871000" (driver="qemu2")
	I0624 03:44:00.084695    8110 client.go:168] LocalClient.Create starting
	I0624 03:44:00.084757    8110 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:44:00.084792    8110 main.go:141] libmachine: Decoding PEM data...
	I0624 03:44:00.084805    8110 main.go:141] libmachine: Parsing certificate...
	I0624 03:44:00.084843    8110 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:44:00.084867    8110 main.go:141] libmachine: Decoding PEM data...
	I0624 03:44:00.084878    8110 main.go:141] libmachine: Parsing certificate...
	I0624 03:44:00.085313    8110 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:44:00.226783    8110 main.go:141] libmachine: Creating SSH key...
	I0624 03:44:00.324862    8110 main.go:141] libmachine: Creating Disk image...
	I0624 03:44:00.324873    8110 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:44:00.325084    8110 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/calico-871000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/calico-871000/disk.qcow2
	I0624 03:44:00.337779    8110 main.go:141] libmachine: STDOUT: 
	I0624 03:44:00.337796    8110 main.go:141] libmachine: STDERR: 
	I0624 03:44:00.337845    8110 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/calico-871000/disk.qcow2 +20000M
	I0624 03:44:00.348670    8110 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:44:00.348685    8110 main.go:141] libmachine: STDERR: 
	I0624 03:44:00.348706    8110 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/calico-871000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/calico-871000/disk.qcow2
	I0624 03:44:00.348719    8110 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:44:00.348747    8110 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/calico-871000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/calico-871000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/calico-871000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:a5:cf:e3:db:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/calico-871000/disk.qcow2
	I0624 03:44:00.350463    8110 main.go:141] libmachine: STDOUT: 
	I0624 03:44:00.350476    8110 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:44:00.350497    8110 client.go:171] duration metric: took 265.796708ms to LocalClient.Create
	I0624 03:44:02.352653    8110 start.go:128] duration metric: took 2.293216125s to createHost
	I0624 03:44:02.352719    8110 start.go:83] releasing machines lock for "calico-871000", held for 2.293335291s
	W0624 03:44:02.352766    8110 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:44:02.367973    8110 out.go:177] * Deleting "calico-871000" in qemu2 ...
	W0624 03:44:02.396112    8110 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:44:02.396132    8110 start.go:728] Will try again in 5 seconds ...
	I0624 03:44:07.398262    8110 start.go:360] acquireMachinesLock for calico-871000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:44:07.398814    8110 start.go:364] duration metric: took 440.958µs to acquireMachinesLock for "calico-871000"
	I0624 03:44:07.398977    8110 start.go:93] Provisioning new machine with config: &{Name:calico-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:calico-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:44:07.399291    8110 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:44:07.410810    8110 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0624 03:44:07.460135    8110 start.go:159] libmachine.API.Create for "calico-871000" (driver="qemu2")
	I0624 03:44:07.460193    8110 client.go:168] LocalClient.Create starting
	I0624 03:44:07.460312    8110 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:44:07.460378    8110 main.go:141] libmachine: Decoding PEM data...
	I0624 03:44:07.460393    8110 main.go:141] libmachine: Parsing certificate...
	I0624 03:44:07.460450    8110 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:44:07.460495    8110 main.go:141] libmachine: Decoding PEM data...
	I0624 03:44:07.460510    8110 main.go:141] libmachine: Parsing certificate...
	I0624 03:44:07.461163    8110 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:44:07.613391    8110 main.go:141] libmachine: Creating SSH key...
	I0624 03:44:07.687494    8110 main.go:141] libmachine: Creating Disk image...
	I0624 03:44:07.687500    8110 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:44:07.687706    8110 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/calico-871000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/calico-871000/disk.qcow2
	I0624 03:44:07.700450    8110 main.go:141] libmachine: STDOUT: 
	I0624 03:44:07.700470    8110 main.go:141] libmachine: STDERR: 
	I0624 03:44:07.700531    8110 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/calico-871000/disk.qcow2 +20000M
	I0624 03:44:07.711295    8110 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:44:07.711320    8110 main.go:141] libmachine: STDERR: 
	I0624 03:44:07.711333    8110 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/calico-871000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/calico-871000/disk.qcow2
	I0624 03:44:07.711338    8110 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:44:07.711386    8110 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/calico-871000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/calico-871000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/calico-871000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:2e:cd:f7:c1:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/calico-871000/disk.qcow2
	I0624 03:44:07.713105    8110 main.go:141] libmachine: STDOUT: 
	I0624 03:44:07.713119    8110 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:44:07.713131    8110 client.go:171] duration metric: took 252.934125ms to LocalClient.Create
	I0624 03:44:09.715290    8110 start.go:128] duration metric: took 2.315970625s to createHost
	I0624 03:44:09.715354    8110 start.go:83] releasing machines lock for "calico-871000", held for 2.316518458s
	W0624 03:44:09.715738    8110 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-871000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-871000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:44:09.725285    8110 out.go:177] 
	W0624 03:44:09.733448    8110 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:44:09.733482    8110 out.go:239] * 
	* 
	W0624 03:44:09.735780    8110 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:44:09.748241    8110 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.83s)
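As the log shows, the calico start fails on the identical socket_vmnet dial, two create attempts and exit status 80, before any Calico-specific step ever runs; the pre-flight check sketched after the kindnet failure applies here unchanged.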

TestNetworkPlugins/group/custom-flannel/Start (9.88s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-871000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-871000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.877914292s)

-- stdout --
	* [custom-flannel-871000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-871000" primary control-plane node in "custom-flannel-871000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-871000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0624 03:44:12.193071    8229 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:44:12.193203    8229 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:44:12.193206    8229 out.go:304] Setting ErrFile to fd 2...
	I0624 03:44:12.193209    8229 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:44:12.193350    8229 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:44:12.194426    8229 out.go:298] Setting JSON to false
	I0624 03:44:12.210360    8229 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6222,"bootTime":1719219630,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:44:12.210420    8229 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:44:12.217491    8229 out.go:177] * [custom-flannel-871000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:44:12.225519    8229 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:44:12.225577    8229 notify.go:220] Checking for updates...
	I0624 03:44:12.233427    8229 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:44:12.237285    8229 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:44:12.240453    8229 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:44:12.244435    8229 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:44:12.245828    8229 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:44:12.249736    8229 config.go:182] Loaded profile config "cert-expiration-509000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:44:12.249813    8229 config.go:182] Loaded profile config "multinode-913000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:44:12.249864    8229 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:44:12.253439    8229 out.go:177] * Using the qemu2 driver based on user configuration
	I0624 03:44:12.258351    8229 start.go:297] selected driver: qemu2
	I0624 03:44:12.258357    8229 start.go:901] validating driver "qemu2" against <nil>
	I0624 03:44:12.258365    8229 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:44:12.260605    8229 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0624 03:44:12.264440    8229 out.go:177] * Automatically selected the socket_vmnet network
	I0624 03:44:12.265905    8229 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 03:44:12.265939    8229 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0624 03:44:12.265947    8229 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0624 03:44:12.265974    8229 start.go:340] cluster config:
	{Name:custom-flannel-871000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:custom-flannel-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:44:12.270533    8229 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:44:12.278410    8229 out.go:177] * Starting "custom-flannel-871000" primary control-plane node in "custom-flannel-871000" cluster
	I0624 03:44:12.282408    8229 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:44:12.282443    8229 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0624 03:44:12.282450    8229 cache.go:56] Caching tarball of preloaded images
	I0624 03:44:12.282515    8229 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0624 03:44:12.282520    8229 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:44:12.282592    8229 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/custom-flannel-871000/config.json ...
	I0624 03:44:12.282603    8229 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/custom-flannel-871000/config.json: {Name:mkbce58fcd210058c8a3a12c6baac2e0d358a446 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:44:12.282814    8229 start.go:360] acquireMachinesLock for custom-flannel-871000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:44:12.282848    8229 start.go:364] duration metric: took 27.667µs to acquireMachinesLock for "custom-flannel-871000"
	I0624 03:44:12.282859    8229 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:custom-flannel-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:44:12.282902    8229 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:44:12.291411    8229 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0624 03:44:12.308947    8229 start.go:159] libmachine.API.Create for "custom-flannel-871000" (driver="qemu2")
	I0624 03:44:12.308971    8229 client.go:168] LocalClient.Create starting
	I0624 03:44:12.309029    8229 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:44:12.309057    8229 main.go:141] libmachine: Decoding PEM data...
	I0624 03:44:12.309067    8229 main.go:141] libmachine: Parsing certificate...
	I0624 03:44:12.309112    8229 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:44:12.309134    8229 main.go:141] libmachine: Decoding PEM data...
	I0624 03:44:12.309145    8229 main.go:141] libmachine: Parsing certificate...
	I0624 03:44:12.309481    8229 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:44:12.451402    8229 main.go:141] libmachine: Creating SSH key...
	I0624 03:44:12.485611    8229 main.go:141] libmachine: Creating Disk image...
	I0624 03:44:12.485616    8229 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:44:12.485809    8229 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/custom-flannel-871000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/custom-flannel-871000/disk.qcow2
	I0624 03:44:12.498168    8229 main.go:141] libmachine: STDOUT: 
	I0624 03:44:12.498190    8229 main.go:141] libmachine: STDERR: 
	I0624 03:44:12.498251    8229 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/custom-flannel-871000/disk.qcow2 +20000M
	I0624 03:44:12.509078    8229 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:44:12.509092    8229 main.go:141] libmachine: STDERR: 
	I0624 03:44:12.509107    8229 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/custom-flannel-871000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/custom-flannel-871000/disk.qcow2
	I0624 03:44:12.509110    8229 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:44:12.509137    8229 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/custom-flannel-871000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/custom-flannel-871000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/custom-flannel-871000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:d4:f7:b1:96:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/custom-flannel-871000/disk.qcow2
	I0624 03:44:12.510748    8229 main.go:141] libmachine: STDOUT: 
	I0624 03:44:12.510839    8229 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:44:12.510857    8229 client.go:171] duration metric: took 201.882708ms to LocalClient.Create
	I0624 03:44:14.513014    8229 start.go:128] duration metric: took 2.230107375s to createHost
	I0624 03:44:14.513160    8229 start.go:83] releasing machines lock for "custom-flannel-871000", held for 2.230250791s
	W0624 03:44:14.513213    8229 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:44:14.520577    8229 out.go:177] * Deleting "custom-flannel-871000" in qemu2 ...
	W0624 03:44:14.548732    8229 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:44:14.548762    8229 start.go:728] Will try again in 5 seconds ...
	I0624 03:44:19.550906    8229 start.go:360] acquireMachinesLock for custom-flannel-871000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:44:19.551491    8229 start.go:364] duration metric: took 386.458µs to acquireMachinesLock for "custom-flannel-871000"
	I0624 03:44:19.551613    8229 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:custom-flannel-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:44:19.551927    8229 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:44:19.570739    8229 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0624 03:44:19.619966    8229 start.go:159] libmachine.API.Create for "custom-flannel-871000" (driver="qemu2")
	I0624 03:44:19.620014    8229 client.go:168] LocalClient.Create starting
	I0624 03:44:19.620130    8229 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:44:19.620192    8229 main.go:141] libmachine: Decoding PEM data...
	I0624 03:44:19.620209    8229 main.go:141] libmachine: Parsing certificate...
	I0624 03:44:19.620265    8229 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:44:19.620308    8229 main.go:141] libmachine: Decoding PEM data...
	I0624 03:44:19.620322    8229 main.go:141] libmachine: Parsing certificate...
	I0624 03:44:19.620837    8229 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:44:19.771636    8229 main.go:141] libmachine: Creating SSH key...
	I0624 03:44:19.967475    8229 main.go:141] libmachine: Creating Disk image...
	I0624 03:44:19.967481    8229 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:44:19.967705    8229 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/custom-flannel-871000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/custom-flannel-871000/disk.qcow2
	I0624 03:44:19.980682    8229 main.go:141] libmachine: STDOUT: 
	I0624 03:44:19.980702    8229 main.go:141] libmachine: STDERR: 
	I0624 03:44:19.980753    8229 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/custom-flannel-871000/disk.qcow2 +20000M
	I0624 03:44:19.991588    8229 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:44:19.991603    8229 main.go:141] libmachine: STDERR: 
	I0624 03:44:19.991615    8229 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/custom-flannel-871000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/custom-flannel-871000/disk.qcow2
	I0624 03:44:19.991620    8229 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:44:19.991663    8229 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/custom-flannel-871000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/custom-flannel-871000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/custom-flannel-871000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:a0:e7:2e:8a:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/custom-flannel-871000/disk.qcow2
	I0624 03:44:19.993341    8229 main.go:141] libmachine: STDOUT: 
	I0624 03:44:19.993355    8229 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:44:19.993368    8229 client.go:171] duration metric: took 373.352666ms to LocalClient.Create
	I0624 03:44:21.995523    8229 start.go:128] duration metric: took 2.443587458s to createHost
	I0624 03:44:21.995594    8229 start.go:83] releasing machines lock for "custom-flannel-871000", held for 2.444091166s
	W0624 03:44:21.996017    8229 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-871000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-871000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:44:22.008767    8229 out.go:177] 
	W0624 03:44:22.013856    8229 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:44:22.013925    8229 out.go:239] * 
	* 
	W0624 03:44:22.016653    8229 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:44:22.028676    8229 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.88s)
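Every start in this group dies at the same step: libmachine builds the disk image successfully, then /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so the VM never launches. A minimal triage sketch for the build host (hypothetical, not part of the test run; assumes the launchd-managed socket_vmnet setup that the qemu2 driver docs describe):

	# Is the socket there, and is the daemon alive?
	ls -l /var/run/socket_vmnet              # should exist and be a socket (mode starts with 's')
	pgrep -fl socket_vmnet                   # is the daemon process running at all?
	sudo launchctl list | grep socket_vmnet  # loaded as a launchd service? (launchd setup assumed)

If the daemon is down, every profile in the run fails identically, which matches the repeated "Connection refused" in the tests that follow.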

                                                
                                    
TestNetworkPlugins/group/false/Start (9.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-871000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-871000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.931640416s)
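The stdout/stderr below show that disk creation itself succeeds before the VM launch fails. For reference, the two qemu-img calls libmachine makes can be reproduced by hand (paths shortened here for readability; the real ones live under .minikube/machines/<profile>/ as shown in the log):

	# Convert the raw boot2docker disk to qcow2, then grow it by 20000 MB.
	# qcow2 is sparse, so the resize does not allocate that space up front.
	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	qemu-img resize disk.qcow2 +20000M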

                                                
                                                
-- stdout --
	* [false-871000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-871000" primary control-plane node in "false-871000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-871000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0624 03:44:24.458462    8347 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:44:24.458620    8347 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:44:24.458623    8347 out.go:304] Setting ErrFile to fd 2...
	I0624 03:44:24.458626    8347 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:44:24.458751    8347 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:44:24.459798    8347 out.go:298] Setting JSON to false
	I0624 03:44:24.475975    8347 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6234,"bootTime":1719219630,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:44:24.476040    8347 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:44:24.482368    8347 out.go:177] * [false-871000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:44:24.490468    8347 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:44:24.490493    8347 notify.go:220] Checking for updates...
	I0624 03:44:24.497356    8347 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:44:24.498635    8347 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:44:24.501385    8347 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:44:24.504369    8347 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:44:24.507461    8347 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:44:24.510713    8347 config.go:182] Loaded profile config "cert-expiration-509000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:44:24.510787    8347 config.go:182] Loaded profile config "multinode-913000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:44:24.510848    8347 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:44:24.515379    8347 out.go:177] * Using the qemu2 driver based on user configuration
	I0624 03:44:24.522321    8347 start.go:297] selected driver: qemu2
	I0624 03:44:24.522329    8347 start.go:901] validating driver "qemu2" against <nil>
	I0624 03:44:24.522337    8347 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:44:24.524769    8347 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0624 03:44:24.527339    8347 out.go:177] * Automatically selected the socket_vmnet network
	I0624 03:44:24.531481    8347 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 03:44:24.531511    8347 cni.go:84] Creating CNI manager for "false"
	I0624 03:44:24.531539    8347 start.go:340] cluster config:
	{Name:false-871000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:false-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:44:24.536028    8347 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:44:24.543373    8347 out.go:177] * Starting "false-871000" primary control-plane node in "false-871000" cluster
	I0624 03:44:24.547353    8347 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:44:24.547372    8347 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0624 03:44:24.547382    8347 cache.go:56] Caching tarball of preloaded images
	I0624 03:44:24.547460    8347 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0624 03:44:24.547466    8347 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:44:24.547537    8347 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/false-871000/config.json ...
	I0624 03:44:24.547555    8347 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/false-871000/config.json: {Name:mkf65a735b841dcbc12141cecd79088d7f0d9e36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:44:24.547901    8347 start.go:360] acquireMachinesLock for false-871000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:44:24.547937    8347 start.go:364] duration metric: took 30.459µs to acquireMachinesLock for "false-871000"
	I0624 03:44:24.547948    8347 start.go:93] Provisioning new machine with config: &{Name:false-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:false-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:44:24.547990    8347 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:44:24.555337    8347 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0624 03:44:24.573508    8347 start.go:159] libmachine.API.Create for "false-871000" (driver="qemu2")
	I0624 03:44:24.573538    8347 client.go:168] LocalClient.Create starting
	I0624 03:44:24.573612    8347 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:44:24.573644    8347 main.go:141] libmachine: Decoding PEM data...
	I0624 03:44:24.573655    8347 main.go:141] libmachine: Parsing certificate...
	I0624 03:44:24.573697    8347 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:44:24.573720    8347 main.go:141] libmachine: Decoding PEM data...
	I0624 03:44:24.573736    8347 main.go:141] libmachine: Parsing certificate...
	I0624 03:44:24.574215    8347 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:44:24.715366    8347 main.go:141] libmachine: Creating SSH key...
	I0624 03:44:24.793620    8347 main.go:141] libmachine: Creating Disk image...
	I0624 03:44:24.793626    8347 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:44:24.793831    8347 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/false-871000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/false-871000/disk.qcow2
	I0624 03:44:24.806162    8347 main.go:141] libmachine: STDOUT: 
	I0624 03:44:24.806196    8347 main.go:141] libmachine: STDERR: 
	I0624 03:44:24.806249    8347 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/false-871000/disk.qcow2 +20000M
	I0624 03:44:24.817129    8347 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:44:24.817152    8347 main.go:141] libmachine: STDERR: 
	I0624 03:44:24.817165    8347 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/false-871000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/false-871000/disk.qcow2
	I0624 03:44:24.817171    8347 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:44:24.817205    8347 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/false-871000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/false-871000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/false-871000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:66:3c:cc:29:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/false-871000/disk.qcow2
	I0624 03:44:24.818851    8347 main.go:141] libmachine: STDOUT: 
	I0624 03:44:24.818865    8347 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:44:24.818883    8347 client.go:171] duration metric: took 245.340625ms to LocalClient.Create
	I0624 03:44:26.821041    8347 start.go:128] duration metric: took 2.273049333s to createHost
	I0624 03:44:26.821080    8347 start.go:83] releasing machines lock for "false-871000", held for 2.273153083s
	W0624 03:44:26.821150    8347 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:44:26.833579    8347 out.go:177] * Deleting "false-871000" in qemu2 ...
	W0624 03:44:26.863310    8347 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:44:26.863333    8347 start.go:728] Will try again in 5 seconds ...
	I0624 03:44:31.865468    8347 start.go:360] acquireMachinesLock for false-871000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:44:31.866051    8347 start.go:364] duration metric: took 375.542µs to acquireMachinesLock for "false-871000"
	I0624 03:44:31.866186    8347 start.go:93] Provisioning new machine with config: &{Name:false-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:false-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:44:31.866483    8347 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:44:31.872098    8347 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0624 03:44:31.921346    8347 start.go:159] libmachine.API.Create for "false-871000" (driver="qemu2")
	I0624 03:44:31.921400    8347 client.go:168] LocalClient.Create starting
	I0624 03:44:31.921525    8347 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:44:31.921592    8347 main.go:141] libmachine: Decoding PEM data...
	I0624 03:44:31.921607    8347 main.go:141] libmachine: Parsing certificate...
	I0624 03:44:31.921670    8347 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:44:31.921715    8347 main.go:141] libmachine: Decoding PEM data...
	I0624 03:44:31.921729    8347 main.go:141] libmachine: Parsing certificate...
	I0624 03:44:31.922558    8347 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:44:32.083013    8347 main.go:141] libmachine: Creating SSH key...
	I0624 03:44:32.287462    8347 main.go:141] libmachine: Creating Disk image...
	I0624 03:44:32.287469    8347 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:44:32.287714    8347 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/false-871000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/false-871000/disk.qcow2
	I0624 03:44:32.300805    8347 main.go:141] libmachine: STDOUT: 
	I0624 03:44:32.300825    8347 main.go:141] libmachine: STDERR: 
	I0624 03:44:32.300887    8347 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/false-871000/disk.qcow2 +20000M
	I0624 03:44:32.311758    8347 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:44:32.311785    8347 main.go:141] libmachine: STDERR: 
	I0624 03:44:32.311797    8347 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/false-871000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/false-871000/disk.qcow2
	I0624 03:44:32.311801    8347 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:44:32.311837    8347 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/false-871000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/false-871000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/false-871000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:fd:0c:d8:ad:68 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/false-871000/disk.qcow2
	I0624 03:44:32.313501    8347 main.go:141] libmachine: STDOUT: 
	I0624 03:44:32.313514    8347 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:44:32.313526    8347 client.go:171] duration metric: took 392.125333ms to LocalClient.Create
	I0624 03:44:34.315679    8347 start.go:128] duration metric: took 2.449181625s to createHost
	I0624 03:44:34.315730    8347 start.go:83] releasing machines lock for "false-871000", held for 2.449677333s
	W0624 03:44:34.316061    8347 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-871000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-871000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:44:34.329795    8347 out.go:177] 
	W0624 03:44:34.333889    8347 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:44:34.333945    8347 out.go:239] * 
	* 
	W0624 03:44:34.336323    8347 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:44:34.346631    8347 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.93s)
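minikube's own recovery (delete the profile, wait 5 seconds, retry once) cannot help here, because the socket_vmnet daemon never comes back, so both attempts fail identically. If the daemon is being restarted out of band, one possible workaround is a wrapper that waits for the socket before retrying (a sketch only, not something the harness does; socket path taken from the log above):

	# Block until the vmnet socket exists, then retry the same start command.
	until [ -S /var/run/socket_vmnet ]; do sleep 1; done
	out/minikube-darwin-arm64 start -p false-871000 --memory=3072 --driver=qemu2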

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (9.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-871000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-871000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.826911542s)
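Note the E-level line in the stderr below: --enable-default-cni is deprecated, and minikube rewrites it to --cni=bridge before hitting the same (unrelated) socket failure. The equivalent invocation without the deprecated flag would be:

	out/minikube-darwin-arm64 start -p enable-default-cni-871000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2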

                                                
                                                
-- stdout --
	* [enable-default-cni-871000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-871000" primary control-plane node in "enable-default-cni-871000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-871000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0624 03:44:36.574320    8457 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:44:36.574461    8457 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:44:36.574464    8457 out.go:304] Setting ErrFile to fd 2...
	I0624 03:44:36.574469    8457 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:44:36.574590    8457 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:44:36.575670    8457 out.go:298] Setting JSON to false
	I0624 03:44:36.591810    8457 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6246,"bootTime":1719219630,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:44:36.591884    8457 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:44:36.598779    8457 out.go:177] * [enable-default-cni-871000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:44:36.605702    8457 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:44:36.605744    8457 notify.go:220] Checking for updates...
	I0624 03:44:36.614743    8457 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:44:36.617750    8457 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:44:36.620782    8457 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:44:36.623789    8457 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:44:36.626783    8457 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:44:36.630108    8457 config.go:182] Loaded profile config "cert-expiration-509000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:44:36.630179    8457 config.go:182] Loaded profile config "multinode-913000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:44:36.630237    8457 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:44:36.632765    8457 out.go:177] * Using the qemu2 driver based on user configuration
	I0624 03:44:36.639744    8457 start.go:297] selected driver: qemu2
	I0624 03:44:36.639754    8457 start.go:901] validating driver "qemu2" against <nil>
	I0624 03:44:36.639762    8457 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:44:36.642045    8457 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0624 03:44:36.645780    8457 out.go:177] * Automatically selected the socket_vmnet network
	E0624 03:44:36.648711    8457 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0624 03:44:36.648724    8457 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 03:44:36.648739    8457 cni.go:84] Creating CNI manager for "bridge"
	I0624 03:44:36.648743    8457 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0624 03:44:36.648773    8457 start.go:340] cluster config:
	{Name:enable-default-cni-871000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:enable-default-cni-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:44:36.653230    8457 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:44:36.661612    8457 out.go:177] * Starting "enable-default-cni-871000" primary control-plane node in "enable-default-cni-871000" cluster
	I0624 03:44:36.665767    8457 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:44:36.665783    8457 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0624 03:44:36.665790    8457 cache.go:56] Caching tarball of preloaded images
	I0624 03:44:36.665853    8457 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0624 03:44:36.665859    8457 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:44:36.665919    8457 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/enable-default-cni-871000/config.json ...
	I0624 03:44:36.665930    8457 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/enable-default-cni-871000/config.json: {Name:mk1a3ce325bba089f1cbbcfb2c2783c7a333d63e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:44:36.666134    8457 start.go:360] acquireMachinesLock for enable-default-cni-871000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:44:36.666172    8457 start.go:364] duration metric: took 28.583µs to acquireMachinesLock for "enable-default-cni-871000"
	I0624 03:44:36.666183    8457 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:enable-default-cni-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:44:36.666210    8457 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:44:36.673728    8457 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0624 03:44:36.691107    8457 start.go:159] libmachine.API.Create for "enable-default-cni-871000" (driver="qemu2")
	I0624 03:44:36.691129    8457 client.go:168] LocalClient.Create starting
	I0624 03:44:36.691190    8457 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:44:36.691220    8457 main.go:141] libmachine: Decoding PEM data...
	I0624 03:44:36.691228    8457 main.go:141] libmachine: Parsing certificate...
	I0624 03:44:36.691264    8457 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:44:36.691289    8457 main.go:141] libmachine: Decoding PEM data...
	I0624 03:44:36.691297    8457 main.go:141] libmachine: Parsing certificate...
	I0624 03:44:36.691716    8457 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:44:36.834175    8457 main.go:141] libmachine: Creating SSH key...
	I0624 03:44:36.880623    8457 main.go:141] libmachine: Creating Disk image...
	I0624 03:44:36.880629    8457 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:44:36.880821    8457 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/enable-default-cni-871000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/enable-default-cni-871000/disk.qcow2
	I0624 03:44:36.893138    8457 main.go:141] libmachine: STDOUT: 
	I0624 03:44:36.893158    8457 main.go:141] libmachine: STDERR: 
	I0624 03:44:36.893212    8457 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/enable-default-cni-871000/disk.qcow2 +20000M
	I0624 03:44:36.903890    8457 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:44:36.903906    8457 main.go:141] libmachine: STDERR: 
	I0624 03:44:36.903921    8457 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/enable-default-cni-871000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/enable-default-cni-871000/disk.qcow2
	I0624 03:44:36.903939    8457 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:44:36.903973    8457 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/enable-default-cni-871000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/enable-default-cni-871000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/enable-default-cni-871000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:8d:c3:53:44:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/enable-default-cni-871000/disk.qcow2
	I0624 03:44:36.905628    8457 main.go:141] libmachine: STDOUT: 
	I0624 03:44:36.905644    8457 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:44:36.905664    8457 client.go:171] duration metric: took 214.529917ms to LocalClient.Create
	I0624 03:44:38.907823    8457 start.go:128] duration metric: took 2.241610458s to createHost
	I0624 03:44:38.907904    8457 start.go:83] releasing machines lock for "enable-default-cni-871000", held for 2.241741959s
	W0624 03:44:38.907949    8457 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:44:38.916265    8457 out.go:177] * Deleting "enable-default-cni-871000" in qemu2 ...
	W0624 03:44:38.950687    8457 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:44:38.950726    8457 start.go:728] Will try again in 5 seconds ...
	I0624 03:44:43.952926    8457 start.go:360] acquireMachinesLock for enable-default-cni-871000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:44:43.953410    8457 start.go:364] duration metric: took 354.291µs to acquireMachinesLock for "enable-default-cni-871000"
	I0624 03:44:43.953531    8457 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:enable-default-cni-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:44:43.953802    8457 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:44:43.970238    8457 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0624 03:44:44.019608    8457 start.go:159] libmachine.API.Create for "enable-default-cni-871000" (driver="qemu2")
	I0624 03:44:44.019659    8457 client.go:168] LocalClient.Create starting
	I0624 03:44:44.019776    8457 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:44:44.019835    8457 main.go:141] libmachine: Decoding PEM data...
	I0624 03:44:44.019852    8457 main.go:141] libmachine: Parsing certificate...
	I0624 03:44:44.019914    8457 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:44:44.019961    8457 main.go:141] libmachine: Decoding PEM data...
	I0624 03:44:44.019975    8457 main.go:141] libmachine: Parsing certificate...
	I0624 03:44:44.020634    8457 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:44:44.172413    8457 main.go:141] libmachine: Creating SSH key...
	I0624 03:44:44.297561    8457 main.go:141] libmachine: Creating Disk image...
	I0624 03:44:44.297566    8457 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:44:44.297779    8457 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/enable-default-cni-871000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/enable-default-cni-871000/disk.qcow2
	I0624 03:44:44.310443    8457 main.go:141] libmachine: STDOUT: 
	I0624 03:44:44.310466    8457 main.go:141] libmachine: STDERR: 
	I0624 03:44:44.310528    8457 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/enable-default-cni-871000/disk.qcow2 +20000M
	I0624 03:44:44.321258    8457 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:44:44.321275    8457 main.go:141] libmachine: STDERR: 
	I0624 03:44:44.321288    8457 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/enable-default-cni-871000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/enable-default-cni-871000/disk.qcow2
	I0624 03:44:44.321305    8457 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:44:44.321341    8457 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/enable-default-cni-871000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/enable-default-cni-871000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/enable-default-cni-871000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:03:09:cf:22:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/enable-default-cni-871000/disk.qcow2
	I0624 03:44:44.322896    8457 main.go:141] libmachine: STDOUT: 
	I0624 03:44:44.322911    8457 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:44:44.322924    8457 client.go:171] duration metric: took 303.260875ms to LocalClient.Create
	I0624 03:44:46.325084    8457 start.go:128] duration metric: took 2.371276125s to createHost
	I0624 03:44:46.325162    8457 start.go:83] releasing machines lock for "enable-default-cni-871000", held for 2.371741459s
	W0624 03:44:46.325619    8457 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-871000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-871000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:44:46.340301    8457 out.go:177] 
	W0624 03:44:46.344373    8457 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:44:46.344403    8457 out.go:239] * 
	* 
	W0624 03:44:46.347110    8457 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:44:46.361215    8457 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.83s)
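Note: every failure in this group dies at the same step: the qemu2 driver wraps the QEMU launch in socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so no VM is ever created. A minimal sketch for probing this from the affected host, assuming the /opt/socket_vmnet layout shown in the log; the daemon invocation and gateway address below come from the socket_vmnet README, not from this run:

    # Is the daemon's unix socket present?
    ls -l /var/run/socket_vmnet
    # socket_vmnet_client connects to the socket, then runs the rest of its
    # argv with the connection on an inherited descriptor (minikube passes
    # fd=3 to QEMU above), so a trivial command suffices as a probe:
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
    # If the probe also reports "Connection refused", (re)start the daemon, e.g.:
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet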

TestNetworkPlugins/group/flannel/Start (9.78s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-871000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-871000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.781784833s)

-- stdout --
	* [flannel-871000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-871000" primary control-plane node in "flannel-871000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-871000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0624 03:44:48.602241    8573 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:44:48.602372    8573 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:44:48.602376    8573 out.go:304] Setting ErrFile to fd 2...
	I0624 03:44:48.602378    8573 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:44:48.602500    8573 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:44:48.603493    8573 out.go:298] Setting JSON to false
	I0624 03:44:48.619572    8573 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6258,"bootTime":1719219630,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:44:48.619640    8573 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:44:48.626010    8573 out.go:177] * [flannel-871000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:44:48.632939    8573 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:44:48.632994    8573 notify.go:220] Checking for updates...
	I0624 03:44:48.640968    8573 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:44:48.643993    8573 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:44:48.647014    8573 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:44:48.650024    8573 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:44:48.653025    8573 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:44:48.656444    8573 config.go:182] Loaded profile config "cert-expiration-509000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:44:48.656512    8573 config.go:182] Loaded profile config "multinode-913000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:44:48.656568    8573 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:44:48.661048    8573 out.go:177] * Using the qemu2 driver based on user configuration
	I0624 03:44:48.667975    8573 start.go:297] selected driver: qemu2
	I0624 03:44:48.667982    8573 start.go:901] validating driver "qemu2" against <nil>
	I0624 03:44:48.667989    8573 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:44:48.670297    8573 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0624 03:44:48.673040    8573 out.go:177] * Automatically selected the socket_vmnet network
	I0624 03:44:48.675955    8573 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 03:44:48.675992    8573 cni.go:84] Creating CNI manager for "flannel"
	I0624 03:44:48.675998    8573 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0624 03:44:48.676042    8573 start.go:340] cluster config:
	{Name:flannel-871000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:flannel-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:44:48.680547    8573 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:44:48.689031    8573 out.go:177] * Starting "flannel-871000" primary control-plane node in "flannel-871000" cluster
	I0624 03:44:48.693021    8573 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:44:48.693036    8573 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0624 03:44:48.693043    8573 cache.go:56] Caching tarball of preloaded images
	I0624 03:44:48.693101    8573 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0624 03:44:48.693109    8573 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:44:48.693176    8573 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/flannel-871000/config.json ...
	I0624 03:44:48.693188    8573 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/flannel-871000/config.json: {Name:mkfe54ab18cb6c534195b51d418b95dcb8ef9856 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:44:48.693407    8573 start.go:360] acquireMachinesLock for flannel-871000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:44:48.693445    8573 start.go:364] duration metric: took 31.75µs to acquireMachinesLock for "flannel-871000"
	I0624 03:44:48.693456    8573 start.go:93] Provisioning new machine with config: &{Name:flannel-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:flannel-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:44:48.693488    8573 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:44:48.699952    8573 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0624 03:44:48.717631    8573 start.go:159] libmachine.API.Create for "flannel-871000" (driver="qemu2")
	I0624 03:44:48.717658    8573 client.go:168] LocalClient.Create starting
	I0624 03:44:48.717718    8573 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:44:48.717748    8573 main.go:141] libmachine: Decoding PEM data...
	I0624 03:44:48.717761    8573 main.go:141] libmachine: Parsing certificate...
	I0624 03:44:48.717805    8573 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:44:48.717829    8573 main.go:141] libmachine: Decoding PEM data...
	I0624 03:44:48.717838    8573 main.go:141] libmachine: Parsing certificate...
	I0624 03:44:48.718200    8573 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:44:48.860584    8573 main.go:141] libmachine: Creating SSH key...
	I0624 03:44:48.938428    8573 main.go:141] libmachine: Creating Disk image...
	I0624 03:44:48.938433    8573 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:44:48.938652    8573 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/flannel-871000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/flannel-871000/disk.qcow2
	I0624 03:44:48.950929    8573 main.go:141] libmachine: STDOUT: 
	I0624 03:44:48.950947    8573 main.go:141] libmachine: STDERR: 
	I0624 03:44:48.951004    8573 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/flannel-871000/disk.qcow2 +20000M
	I0624 03:44:48.962011    8573 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:44:48.962033    8573 main.go:141] libmachine: STDERR: 
	I0624 03:44:48.962049    8573 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/flannel-871000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/flannel-871000/disk.qcow2
	I0624 03:44:48.962053    8573 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:44:48.962087    8573 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/flannel-871000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/flannel-871000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/flannel-871000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:50:b4:f2:02:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/flannel-871000/disk.qcow2
	I0624 03:44:48.963830    8573 main.go:141] libmachine: STDOUT: 
	I0624 03:44:48.963844    8573 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:44:48.963861    8573 client.go:171] duration metric: took 246.199083ms to LocalClient.Create
	I0624 03:44:50.966017    8573 start.go:128] duration metric: took 2.272528291s to createHost
	I0624 03:44:50.966066    8573 start.go:83] releasing machines lock for "flannel-871000", held for 2.2726315s
	W0624 03:44:50.966144    8573 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:44:50.982386    8573 out.go:177] * Deleting "flannel-871000" in qemu2 ...
	W0624 03:44:51.011283    8573 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:44:51.011309    8573 start.go:728] Will try again in 5 seconds ...
	I0624 03:44:56.013514    8573 start.go:360] acquireMachinesLock for flannel-871000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:44:56.013943    8573 start.go:364] duration metric: took 329.042µs to acquireMachinesLock for "flannel-871000"
	I0624 03:44:56.014067    8573 start.go:93] Provisioning new machine with config: &{Name:flannel-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:flannel-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:44:56.014356    8573 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:44:56.018874    8573 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0624 03:44:56.067680    8573 start.go:159] libmachine.API.Create for "flannel-871000" (driver="qemu2")
	I0624 03:44:56.067733    8573 client.go:168] LocalClient.Create starting
	I0624 03:44:56.067838    8573 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:44:56.067897    8573 main.go:141] libmachine: Decoding PEM data...
	I0624 03:44:56.067911    8573 main.go:141] libmachine: Parsing certificate...
	I0624 03:44:56.067969    8573 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:44:56.068012    8573 main.go:141] libmachine: Decoding PEM data...
	I0624 03:44:56.068029    8573 main.go:141] libmachine: Parsing certificate...
	I0624 03:44:56.068655    8573 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:44:56.221019    8573 main.go:141] libmachine: Creating SSH key...
	I0624 03:44:56.260005    8573 main.go:141] libmachine: Creating Disk image...
	I0624 03:44:56.260010    8573 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:44:56.260223    8573 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/flannel-871000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/flannel-871000/disk.qcow2
	I0624 03:44:56.272643    8573 main.go:141] libmachine: STDOUT: 
	I0624 03:44:56.272671    8573 main.go:141] libmachine: STDERR: 
	I0624 03:44:56.272713    8573 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/flannel-871000/disk.qcow2 +20000M
	I0624 03:44:56.283686    8573 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:44:56.283713    8573 main.go:141] libmachine: STDERR: 
	I0624 03:44:56.283734    8573 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/flannel-871000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/flannel-871000/disk.qcow2
	I0624 03:44:56.283738    8573 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:44:56.283769    8573 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/flannel-871000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/flannel-871000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/flannel-871000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:3f:12:49:68:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/flannel-871000/disk.qcow2
	I0624 03:44:56.285539    8573 main.go:141] libmachine: STDOUT: 
	I0624 03:44:56.285564    8573 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:44:56.285579    8573 client.go:171] duration metric: took 217.841209ms to LocalClient.Create
	I0624 03:44:58.287740    8573 start.go:128] duration metric: took 2.273375542s to createHost
	I0624 03:44:58.287799    8573 start.go:83] releasing machines lock for "flannel-871000", held for 2.273852541s
	W0624 03:44:58.288155    8573 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-871000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-871000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:44:58.306661    8573 out.go:177] 
	W0624 03:44:58.314763    8573 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:44:58.314806    8573 out.go:239] * 
	* 
	W0624 03:44:58.317574    8573 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:44:58.325693    8573 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.78s)
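Note: this run fails before any CNI-specific work happens: the log shows the CNI manager being created (cni.go:84) and the profile config written, but the process exits during host creation at the same socket_vmnet connection as the other profiles. Exit status 80 matches the "Exiting due to GUEST_PROVISION" reason printed above (minikube's guest-error exit class, as of this minikube version). A quick way to confirm the failure is environmental rather than tied to --cni=flannel would be a throwaway profile with no CNI flag at all; the profile name below is illustrative, not from this report:

    # Hypothetical smoke test: while the socket_vmnet daemon is down this
    # should fail with the same GUEST_PROVISION error and exit status 80:
    out/minikube-darwin-arm64 start -p smoke-871000 --driver=qemu2 --alsologtostderr
    echo $?   # expect 80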

TestNetworkPlugins/group/bridge/Start (9.94s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-871000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-871000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.939010542s)

-- stdout --
	* [bridge-871000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-871000" primary control-plane node in "bridge-871000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-871000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0624 03:44:58.571480    8606 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:44:58.571617    8606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:44:58.571621    8606 out.go:304] Setting ErrFile to fd 2...
	I0624 03:44:58.571623    8606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:44:58.571764    8606 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:44:58.572983    8606 out.go:298] Setting JSON to false
	I0624 03:44:58.590730    8606 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6268,"bootTime":1719219630,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:44:58.590809    8606 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:44:58.595674    8606 out.go:177] * [bridge-871000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:44:58.603684    8606 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:44:58.603684    8606 notify.go:220] Checking for updates...
	I0624 03:44:58.611465    8606 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:44:58.614561    8606 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:44:58.617596    8606 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:44:58.620601    8606 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:44:58.623569    8606 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:44:58.626934    8606 config.go:182] Loaded profile config "flannel-871000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:44:58.627000    8606 config.go:182] Loaded profile config "multinode-913000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:44:58.627068    8606 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:44:58.630533    8606 out.go:177] * Using the qemu2 driver based on user configuration
	I0624 03:44:58.637602    8606 start.go:297] selected driver: qemu2
	I0624 03:44:58.637613    8606 start.go:901] validating driver "qemu2" against <nil>
	I0624 03:44:58.637620    8606 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:44:58.639908    8606 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0624 03:44:58.642611    8606 out.go:177] * Automatically selected the socket_vmnet network
	I0624 03:44:58.645590    8606 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 03:44:58.645608    8606 cni.go:84] Creating CNI manager for "bridge"
	I0624 03:44:58.645612    8606 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0624 03:44:58.645649    8606 start.go:340] cluster config:
	{Name:bridge-871000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:bridge-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:44:58.650321    8606 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:44:58.657456    8606 out.go:177] * Starting "bridge-871000" primary control-plane node in "bridge-871000" cluster
	I0624 03:44:58.661566    8606 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:44:58.661598    8606 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0624 03:44:58.661611    8606 cache.go:56] Caching tarball of preloaded images
	I0624 03:44:58.661685    8606 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0624 03:44:58.661690    8606 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:44:58.661746    8606 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/bridge-871000/config.json ...
	I0624 03:44:58.661757    8606 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/bridge-871000/config.json: {Name:mk6b44efd24133c29bc1dad6299e2d268a79d768 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:44:58.661983    8606 start.go:360] acquireMachinesLock for bridge-871000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:44:58.662013    8606 start.go:364] duration metric: took 24.416µs to acquireMachinesLock for "bridge-871000"
	I0624 03:44:58.662022    8606 start.go:93] Provisioning new machine with config: &{Name:bridge-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:bridge-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:44:58.662059    8606 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:44:58.666515    8606 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0624 03:44:58.682184    8606 start.go:159] libmachine.API.Create for "bridge-871000" (driver="qemu2")
	I0624 03:44:58.682217    8606 client.go:168] LocalClient.Create starting
	I0624 03:44:58.682276    8606 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:44:58.682318    8606 main.go:141] libmachine: Decoding PEM data...
	I0624 03:44:58.682329    8606 main.go:141] libmachine: Parsing certificate...
	I0624 03:44:58.682374    8606 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:44:58.682400    8606 main.go:141] libmachine: Decoding PEM data...
	I0624 03:44:58.682409    8606 main.go:141] libmachine: Parsing certificate...
	I0624 03:44:58.682837    8606 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:44:58.908705    8606 main.go:141] libmachine: Creating SSH key...
	I0624 03:44:58.993991    8606 main.go:141] libmachine: Creating Disk image...
	I0624 03:44:58.994001    8606 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:44:58.994193    8606 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/bridge-871000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/bridge-871000/disk.qcow2
	I0624 03:44:59.007164    8606 main.go:141] libmachine: STDOUT: 
	I0624 03:44:59.007187    8606 main.go:141] libmachine: STDERR: 
	I0624 03:44:59.007246    8606 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/bridge-871000/disk.qcow2 +20000M
	I0624 03:44:59.019565    8606 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:44:59.019583    8606 main.go:141] libmachine: STDERR: 
	I0624 03:44:59.019610    8606 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/bridge-871000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/bridge-871000/disk.qcow2
	I0624 03:44:59.019616    8606 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:44:59.019649    8606 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/bridge-871000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/bridge-871000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/bridge-871000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:d8:39:36:a2:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/bridge-871000/disk.qcow2
	I0624 03:44:59.021755    8606 main.go:141] libmachine: STDOUT: 
	I0624 03:44:59.021772    8606 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:44:59.021800    8606 client.go:171] duration metric: took 339.579958ms to LocalClient.Create
	I0624 03:45:01.023971    8606 start.go:128] duration metric: took 2.36191225s to createHost
	I0624 03:45:01.024026    8606 start.go:83] releasing machines lock for "bridge-871000", held for 2.362024458s
	W0624 03:45:01.024078    8606 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:45:01.039718    8606 out.go:177] * Deleting "bridge-871000" in qemu2 ...
	W0624 03:45:01.062704    8606 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:45:01.062729    8606 start.go:728] Will try again in 5 seconds ...
	I0624 03:45:06.065020    8606 start.go:360] acquireMachinesLock for bridge-871000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:45:06.065474    8606 start.go:364] duration metric: took 346.084µs to acquireMachinesLock for "bridge-871000"
	I0624 03:45:06.065611    8606 start.go:93] Provisioning new machine with config: &{Name:bridge-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:bridge-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:45:06.065967    8606 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:45:06.074515    8606 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0624 03:45:06.122735    8606 start.go:159] libmachine.API.Create for "bridge-871000" (driver="qemu2")
	I0624 03:45:06.122786    8606 client.go:168] LocalClient.Create starting
	I0624 03:45:06.122895    8606 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:45:06.122955    8606 main.go:141] libmachine: Decoding PEM data...
	I0624 03:45:06.122971    8606 main.go:141] libmachine: Parsing certificate...
	I0624 03:45:06.123031    8606 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:45:06.123074    8606 main.go:141] libmachine: Decoding PEM data...
	I0624 03:45:06.123083    8606 main.go:141] libmachine: Parsing certificate...
	I0624 03:45:06.123642    8606 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:45:06.291557    8606 main.go:141] libmachine: Creating SSH key...
	I0624 03:45:06.412469    8606 main.go:141] libmachine: Creating Disk image...
	I0624 03:45:06.412475    8606 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:45:06.412694    8606 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/bridge-871000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/bridge-871000/disk.qcow2
	I0624 03:45:06.425338    8606 main.go:141] libmachine: STDOUT: 
	I0624 03:45:06.425374    8606 main.go:141] libmachine: STDERR: 
	I0624 03:45:06.425425    8606 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/bridge-871000/disk.qcow2 +20000M
	I0624 03:45:06.436319    8606 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:45:06.436336    8606 main.go:141] libmachine: STDERR: 
	I0624 03:45:06.436347    8606 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/bridge-871000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/bridge-871000/disk.qcow2
	I0624 03:45:06.436354    8606 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:45:06.436388    8606 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/bridge-871000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/bridge-871000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/bridge-871000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:0a:05:dc:71:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/bridge-871000/disk.qcow2
	I0624 03:45:06.437985    8606 main.go:141] libmachine: STDOUT: 
	I0624 03:45:06.438002    8606 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:45:06.438014    8606 client.go:171] duration metric: took 315.226833ms to LocalClient.Create
	I0624 03:45:08.440160    8606 start.go:128] duration metric: took 2.374158584s to createHost
	I0624 03:45:08.440193    8606 start.go:83] releasing machines lock for "bridge-871000", held for 2.374717583s
	W0624 03:45:08.440511    8606 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-871000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-871000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:45:08.456144    8606 out.go:177] 
	W0624 03:45:08.460205    8606 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:45:08.460249    8606 out.go:239] * 
	* 
	W0624 03:45:08.462199    8606 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:45:08.474064    8606 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.94s)
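Note: every qemu2 start in this run dies at the same pre-flight step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU is never launched. A minimal Go sketch (not from the minikube sources; it only mirrors the first thing the client does) that reproduces the probe on the affected host:

    // probe_socket_vmnet.go - dial the socket_vmnet control socket;
    // "connection refused" here matches the STDERR captured above,
    // meaning no socket_vmnet daemon is listening on that path.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Println("socket_vmnet not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

A refused connection on an existing socket file usually means the socket_vmnet daemon on the CI host has died or was never started, which would explain why all network-plugin variants in this group fail identically.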

TestNetworkPlugins/group/kubenet/Start (10.09s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-871000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-871000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (10.091716375s)

-- stdout --
	* [kubenet-871000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-871000" primary control-plane node in "kubenet-871000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-871000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0624 03:45:00.794937    8715 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:45:00.795074    8715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:45:00.795077    8715 out.go:304] Setting ErrFile to fd 2...
	I0624 03:45:00.795079    8715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:45:00.795238    8715 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:45:00.796299    8715 out.go:298] Setting JSON to false
	I0624 03:45:00.812291    8715 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6270,"bootTime":1719219630,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:45:00.812357    8715 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:45:00.818754    8715 out.go:177] * [kubenet-871000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:45:00.826742    8715 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:45:00.826811    8715 notify.go:220] Checking for updates...
	I0624 03:45:00.834652    8715 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:45:00.837678    8715 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:45:00.840712    8715 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:45:00.843616    8715 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:45:00.846659    8715 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:45:00.849960    8715 config.go:182] Loaded profile config "bridge-871000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:45:00.850029    8715 config.go:182] Loaded profile config "multinode-913000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:45:00.850078    8715 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:45:00.854575    8715 out.go:177] * Using the qemu2 driver based on user configuration
	I0624 03:45:00.861664    8715 start.go:297] selected driver: qemu2
	I0624 03:45:00.861671    8715 start.go:901] validating driver "qemu2" against <nil>
	I0624 03:45:00.861676    8715 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:45:00.863912    8715 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0624 03:45:00.867662    8715 out.go:177] * Automatically selected the socket_vmnet network
	I0624 03:45:00.871754    8715 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 03:45:00.871802    8715 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0624 03:45:00.871844    8715 start.go:340] cluster config:
	{Name:kubenet-871000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kubenet-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:45:00.876492    8715 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:45:00.883635    8715 out.go:177] * Starting "kubenet-871000" primary control-plane node in "kubenet-871000" cluster
	I0624 03:45:00.887662    8715 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:45:00.887678    8715 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0624 03:45:00.887686    8715 cache.go:56] Caching tarball of preloaded images
	I0624 03:45:00.887789    8715 preload.go:173] Found /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0624 03:45:00.887801    8715 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:45:00.887868    8715 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/kubenet-871000/config.json ...
	I0624 03:45:00.887881    8715 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/kubenet-871000/config.json: {Name:mk0eb0903c1e4156cdd16f4a89ed8819266969f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:45:00.888107    8715 start.go:360] acquireMachinesLock for kubenet-871000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:45:01.024098    8715 start.go:364] duration metric: took 135.973167ms to acquireMachinesLock for "kubenet-871000"
	I0624 03:45:01.024197    8715 start.go:93] Provisioning new machine with config: &{Name:kubenet-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kubenet-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:45:01.024360    8715 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:45:01.032826    8715 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0624 03:45:01.080874    8715 start.go:159] libmachine.API.Create for "kubenet-871000" (driver="qemu2")
	I0624 03:45:01.080928    8715 client.go:168] LocalClient.Create starting
	I0624 03:45:01.081044    8715 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:45:01.081101    8715 main.go:141] libmachine: Decoding PEM data...
	I0624 03:45:01.081125    8715 main.go:141] libmachine: Parsing certificate...
	I0624 03:45:01.081194    8715 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:45:01.081240    8715 main.go:141] libmachine: Decoding PEM data...
	I0624 03:45:01.081259    8715 main.go:141] libmachine: Parsing certificate...
	I0624 03:45:01.081927    8715 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:45:01.232558    8715 main.go:141] libmachine: Creating SSH key...
	I0624 03:45:01.342095    8715 main.go:141] libmachine: Creating Disk image...
	I0624 03:45:01.342101    8715 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:45:01.342309    8715 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubenet-871000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubenet-871000/disk.qcow2
	I0624 03:45:01.354889    8715 main.go:141] libmachine: STDOUT: 
	I0624 03:45:01.354904    8715 main.go:141] libmachine: STDERR: 
	I0624 03:45:01.354971    8715 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubenet-871000/disk.qcow2 +20000M
	I0624 03:45:01.365690    8715 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:45:01.365716    8715 main.go:141] libmachine: STDERR: 
	I0624 03:45:01.365731    8715 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubenet-871000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubenet-871000/disk.qcow2
	I0624 03:45:01.365734    8715 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:45:01.365764    8715 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubenet-871000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubenet-871000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubenet-871000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:a5:08:79:ee:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubenet-871000/disk.qcow2
	I0624 03:45:01.367444    8715 main.go:141] libmachine: STDOUT: 
	I0624 03:45:01.367458    8715 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:45:01.367476    8715 client.go:171] duration metric: took 286.544083ms to LocalClient.Create
	I0624 03:45:03.369671    8715 start.go:128] duration metric: took 2.345303875s to createHost
	I0624 03:45:03.369726    8715 start.go:83] releasing machines lock for "kubenet-871000", held for 2.345592375s
	W0624 03:45:03.369779    8715 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:45:03.389314    8715 out.go:177] * Deleting "kubenet-871000" in qemu2 ...
	W0624 03:45:03.420312    8715 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:45:03.420352    8715 start.go:728] Will try again in 5 seconds ...
	I0624 03:45:08.422584    8715 start.go:360] acquireMachinesLock for kubenet-871000: {Name:mkc1fd40482bf4457c3d252d5370749ca9c8e0b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:45:08.440308    8715 start.go:364] duration metric: took 17.58825ms to acquireMachinesLock for "kubenet-871000"
	I0624 03:45:08.440478    8715 start.go:93] Provisioning new machine with config: &{Name:kubenet-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kubenet-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:45:08.440728    8715 start.go:125] createHost starting for "" (driver="qemu2")
	I0624 03:45:08.449126    8715 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0624 03:45:08.498048    8715 start.go:159] libmachine.API.Create for "kubenet-871000" (driver="qemu2")
	I0624 03:45:08.498093    8715 client.go:168] LocalClient.Create starting
	I0624 03:45:08.498218    8715 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/ca.pem
	I0624 03:45:08.498269    8715 main.go:141] libmachine: Decoding PEM data...
	I0624 03:45:08.498283    8715 main.go:141] libmachine: Parsing certificate...
	I0624 03:45:08.498345    8715 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19124-4612/.minikube/certs/cert.pem
	I0624 03:45:08.498373    8715 main.go:141] libmachine: Decoding PEM data...
	I0624 03:45:08.498390    8715 main.go:141] libmachine: Parsing certificate...
	I0624 03:45:08.498919    8715 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso...
	I0624 03:45:08.662896    8715 main.go:141] libmachine: Creating SSH key...
	I0624 03:45:08.792735    8715 main.go:141] libmachine: Creating Disk image...
	I0624 03:45:08.792744    8715 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0624 03:45:08.792965    8715 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubenet-871000/disk.qcow2.raw /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubenet-871000/disk.qcow2
	I0624 03:45:08.806503    8715 main.go:141] libmachine: STDOUT: 
	I0624 03:45:08.806529    8715 main.go:141] libmachine: STDERR: 
	I0624 03:45:08.806606    8715 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubenet-871000/disk.qcow2 +20000M
	I0624 03:45:08.819163    8715 main.go:141] libmachine: STDOUT: Image resized.
	
	I0624 03:45:08.819184    8715 main.go:141] libmachine: STDERR: 
	I0624 03:45:08.819198    8715 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubenet-871000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubenet-871000/disk.qcow2
	I0624 03:45:08.819203    8715 main.go:141] libmachine: Starting QEMU VM...
	I0624 03:45:08.819233    8715 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubenet-871000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubenet-871000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubenet-871000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:95:a4:ed:a7:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19124-4612/.minikube/machines/kubenet-871000/disk.qcow2
	I0624 03:45:08.821082    8715 main.go:141] libmachine: STDOUT: 
	I0624 03:45:08.821098    8715 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0624 03:45:08.821111    8715 client.go:171] duration metric: took 323.016042ms to LocalClient.Create
	I0624 03:45:10.823251    8715 start.go:128] duration metric: took 2.382505834s to createHost
	I0624 03:45:10.823317    8715 start.go:83] releasing machines lock for "kubenet-871000", held for 2.38300525s
	W0624 03:45:10.823718    8715 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-871000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-871000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0624 03:45:10.831603    8715 out.go:177] 
	W0624 03:45:10.835670    8715 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0624 03:45:10.835736    8715 out.go:239] * 
	* 
	W0624 03:45:10.838215    8715 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:45:10.847629    8715 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (10.09s)
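Note: the stderr above also documents the disk-image recipe libmachine runs before launching QEMU: convert the raw scratch file to qcow2, then grow it by the requested size. A rough stand-alone equivalent of those two logged commands, sketched with os/exec (paths shortened; the real run uses the profile directory under .minikube/machines):

    // qemu_disk.go - sketch of the "Creating Disk image..." steps above.
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        raw, qcow2 := "disk.qcow2.raw", "disk.qcow2" // placeholder paths
        // qemu-img convert -f raw -O qcow2 <raw> <qcow2>
        if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
            log.Fatalf("convert failed: %v\n%s", err, out)
        }
        // qemu-img resize <qcow2> +20000M (delta taken from the log)
        if out, err := exec.Command("qemu-img", "resize", qcow2, "+20000M").CombinedOutput(); err != nil {
            log.Fatalf("resize failed: %v\n%s", err, out)
        }
    }

Both qemu-img steps succeed on every attempt in this run ("Image resized.", empty STDERR); the failure only happens afterwards, when socket_vmnet_client tries to hand QEMU its networking file descriptor.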


Test pass (80/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.23
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.22
12 TestDownloadOnly/v1.30.2/json-events 9.53
13 TestDownloadOnly/v1.30.2/preload-exists 0
16 TestDownloadOnly/v1.30.2/kubectl 0
17 TestDownloadOnly/v1.30.2/LogsDuration 0.08
18 TestDownloadOnly/v1.30.2/DeleteAll 0.23
19 TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds 0.22
21 TestBinaryMirror 0.34
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
35 TestHyperKitDriverInstallOrUpdate 10.3
39 TestErrorSpam/start 0.38
40 TestErrorSpam/status 0.09
41 TestErrorSpam/pause 0.12
42 TestErrorSpam/unpause 0.12
43 TestErrorSpam/stop 7.1
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 3.04
55 TestFunctional/serial/CacheCmd/cache/add_local 1.22
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.03
60 TestFunctional/serial/CacheCmd/cache/delete 0.07
69 TestFunctional/parallel/ConfigCmd 0.22
71 TestFunctional/parallel/DryRun 0.28
72 TestFunctional/parallel/InternationalLanguage 0.11
78 TestFunctional/parallel/AddonsCmd 0.12
93 TestFunctional/parallel/License 0.21
94 TestFunctional/parallel/Version/short 0.04
101 TestFunctional/parallel/ImageCommands/Setup 1.98
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
122 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
124 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.13
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
126 TestFunctional/parallel/ProfileCmd/profile_list 0.1
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.1
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
135 TestFunctional/delete_addon-resizer_images 0.17
136 TestFunctional/delete_my-image_image 0.04
137 TestFunctional/delete_minikube_cached_images 0.04
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 2.04
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.33
193 TestMainNoArgs 0.03
228 TestNoKubernetes/serial/StartNoK8sWithVersion 0.15
232 TestNoKubernetes/serial/VerifyK8sNotRunning 0.05
233 TestNoKubernetes/serial/ProfileList 0.14
234 TestNoKubernetes/serial/Stop 3.57
235 TestStoppedBinaryUpgrade/Setup 1.01
238 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
239 TestStoppedBinaryUpgrade/MinikubeLogs 0.75
266 TestStartStop/group/old-k8s-version/serial/Stop 3.08
267 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.13
277 TestStartStop/group/no-preload/serial/Stop 3.59
278 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
288 TestStartStop/group/embed-certs/serial/Stop 1.92
289 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
299 TestStartStop/group/default-k8s-diff-port/serial/Stop 2.1
300 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
308 TestStartStop/group/newest-cni/serial/DeployApp 0
309 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.05
310 TestStartStop/group/newest-cni/serial/Stop 2.11
311 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
313 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
314 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-954000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-954000: exit status 85 (97.864375ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-954000 | jenkins | v1.33.1 | 24 Jun 24 03:18 PDT |          |
	|         | -p download-only-954000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/24 03:18:45
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0624 03:18:45.628580    5138 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:18:45.628728    5138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:18:45.628732    5138 out.go:304] Setting ErrFile to fd 2...
	I0624 03:18:45.628734    5138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:18:45.628875    5138 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	W0624 03:18:45.628953    5138 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19124-4612/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19124-4612/.minikube/config/config.json: no such file or directory
	I0624 03:18:45.630286    5138 out.go:298] Setting JSON to true
	I0624 03:18:45.649071    5138 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4695,"bootTime":1719219630,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:18:45.649148    5138 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:18:45.665016    5138 out.go:97] [download-only-954000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:18:45.665128    5138 notify.go:220] Checking for updates...
	W0624 03:18:45.665224    5138 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball: no such file or directory
	I0624 03:18:45.669782    5138 out.go:169] MINIKUBE_LOCATION=19124
	I0624 03:18:45.677251    5138 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:18:45.703456    5138 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:18:45.706446    5138 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:18:45.707560    5138 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	W0624 03:18:45.716383    5138 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0624 03:18:45.716612    5138 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:18:45.717828    5138 out.go:97] Using the qemu2 driver based on user configuration
	I0624 03:18:45.717854    5138 start.go:297] selected driver: qemu2
	I0624 03:18:45.717878    5138 start.go:901] validating driver "qemu2" against <nil>
	I0624 03:18:45.717964    5138 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0624 03:18:45.720347    5138 out.go:169] Automatically selected the socket_vmnet network
	I0624 03:18:45.728049    5138 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0624 03:18:45.728153    5138 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0624 03:18:45.728269    5138 cni.go:84] Creating CNI manager for ""
	I0624 03:18:45.728290    5138 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0624 03:18:45.728363    5138 start.go:340] cluster config:
	{Name:download-only-954000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-954000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:18:45.733858    5138 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:18:45.737309    5138 out.go:97] Downloading VM boot image ...
	I0624 03:18:45.737345    5138 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/iso/arm64/minikube-v1.33.1-1718923868-19112-arm64.iso
	I0624 03:18:50.522381    5138 out.go:97] Starting "download-only-954000" primary control-plane node in "download-only-954000" cluster
	I0624 03:18:50.522401    5138 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0624 03:18:50.573046    5138 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0624 03:18:50.573052    5138 cache.go:56] Caching tarball of preloaded images
	I0624 03:18:50.573394    5138 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0624 03:18:50.578376    5138 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0624 03:18:50.578382    5138 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0624 03:18:50.654315    5138 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0624 03:18:56.652059    5138 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0624 03:18:56.652209    5138 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0624 03:18:57.355869    5138 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0624 03:18:57.356072    5138 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/download-only-954000/config.json ...
	I0624 03:18:57.356089    5138 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19124-4612/.minikube/profiles/download-only-954000/config.json: {Name:mkfb538539f791a6e1396e0e1b122bd007f20dd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:18:57.356684    5138 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0624 03:18:57.356882    5138 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0624 03:18:57.746243    5138 out.go:169] 
	W0624 03:18:57.754604    5138 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19124-4612/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1071b1980 0x1071b1980 0x1071b1980 0x1071b1980 0x1071b1980 0x1071b1980 0x1071b1980] Decompressors:map[bz2:0x14000809420 gz:0x14000809428 tar:0x140008093d0 tar.bz2:0x140008093e0 tar.gz:0x140008093f0 tar.xz:0x14000809400 tar.zst:0x14000809410 tbz2:0x140008093e0 tgz:0x140008093f0 txz:0x14000809400 tzst:0x14000809410 xz:0x14000809430 zip:0x14000809440 zst:0x14000809438] Getters:map[file:0x14001720570 http:0x14000462550 https:0x140004625a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0624 03:18:57.754654    5138 out_reason.go:110] 
	W0624 03:18:57.761366    5138 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:18:57.764996    5138 out.go:169] 
	
	
	* The control-plane node download-only-954000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-954000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
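Note: this test passes, but the Last Start log it replays shows why the earlier v1.20.0 download-only run could not cache kubectl: the downloader validates the binary against a .sha256 checksum URL, and dl.k8s.io answers 404 for it, presumably because no darwin/arm64 kubectl artifact was published for v1.20.0. A quick Go probe of that assumption (URL taken from the log above):

    // check_kubectl_checksum.go - HEAD the checksum URL the getter fetches;
    // a 404 here reproduces "Error downloading checksum file: bad response code: 404".
    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
        resp, err := http.Head(url)
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        resp.Body.Close()
        fmt.Println(url, "->", resp.Status)
    }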

TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-954000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)

TestDownloadOnly/v1.30.2/json-events (9.53s)

=== RUN   TestDownloadOnly/v1.30.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-105000 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-105000 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=docker --driver=qemu2 : (9.526561s)
--- PASS: TestDownloadOnly/v1.30.2/json-events (9.53s)

TestDownloadOnly/v1.30.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.2/preload-exists (0.00s)

TestDownloadOnly/v1.30.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.2/kubectl
--- PASS: TestDownloadOnly/v1.30.2/kubectl (0.00s)

TestDownloadOnly/v1.30.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-105000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-105000: exit status 85 (78.955916ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-954000 | jenkins | v1.33.1 | 24 Jun 24 03:18 PDT |                     |
	|         | -p download-only-954000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 24 Jun 24 03:18 PDT | 24 Jun 24 03:18 PDT |
	| delete  | -p download-only-954000        | download-only-954000 | jenkins | v1.33.1 | 24 Jun 24 03:18 PDT | 24 Jun 24 03:18 PDT |
	| start   | -o=json --download-only        | download-only-105000 | jenkins | v1.33.1 | 24 Jun 24 03:18 PDT |                     |
	|         | -p download-only-105000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/24 03:18:58
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0624 03:18:58.419740    5174 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:18:58.419883    5174 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:18:58.419886    5174 out.go:304] Setting ErrFile to fd 2...
	I0624 03:18:58.419889    5174 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:18:58.420020    5174 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:18:58.421052    5174 out.go:298] Setting JSON to true
	I0624 03:18:58.437569    5174 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4708,"bootTime":1719219630,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:18:58.437667    5174 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:18:58.441324    5174 out.go:97] [download-only-105000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:18:58.441392    5174 notify.go:220] Checking for updates...
	I0624 03:18:58.446045    5174 out.go:169] MINIKUBE_LOCATION=19124
	I0624 03:18:58.449345    5174 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:18:58.453288    5174 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:18:58.456397    5174 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:18:58.459366    5174 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	W0624 03:18:58.465567    5174 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0624 03:18:58.465725    5174 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:18:58.469795    5174 out.go:97] Using the qemu2 driver based on user configuration
	I0624 03:18:58.469805    5174 start.go:297] selected driver: qemu2
	I0624 03:18:58.469809    5174 start.go:901] validating driver "qemu2" against <nil>
	I0624 03:18:58.469870    5174 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0624 03:18:58.473173    5174 out.go:169] Automatically selected the socket_vmnet network
	I0624 03:18:58.478385    5174 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0624 03:18:58.478473    5174 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0624 03:18:58.478515    5174 cni.go:84] Creating CNI manager for ""
	I0624 03:18:58.478523    5174 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:18:58.478528    5174 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0624 03:18:58.478570    5174 start.go:340] cluster config:
	{Name:download-only-105000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:download-only-105000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:18:58.482940    5174 iso.go:125] acquiring lock: {Name:mk986aba993b9a63ae62f6f9265ee0595c4cfc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:18:58.487543    5174 out.go:97] Starting "download-only-105000" primary control-plane node in "download-only-105000" cluster
	I0624 03:18:58.487550    5174 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:18:58.538406    5174 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0624 03:18:58.538438    5174 cache.go:56] Caching tarball of preloaded images
	I0624 03:18:58.538848    5174 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:18:58.543314    5174 out.go:97] Downloading Kubernetes v1.30.2 preload ...
	I0624 03:18:58.543321    5174 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 ...
	I0624 03:18:58.614796    5174 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4?checksum=md5:3bd37d965c85173ac77cdcc664938efd -> /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0624 03:19:03.365182    5174 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 ...
	I0624 03:19:03.365496    5174 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/19124-4612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-105000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-105000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.2/LogsDuration (0.08s)
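The preload step logged above downloads the tarball with a "?checksum=md5:..." query string (the convention of the go-getter library that minikube's downloader builds on), then saves and verifies that checksum locally. As a rough sketch of what that verification amounts to (an illustration, not minikube's actual downloader; the file name and digest are the ones from the download.go line above):

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 hashes the file at path and compares it to the expected
	// hex digest, mirroring the save/verify steps logged by preload.go.
	func verifyMD5(path, expected string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()

		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		got := hex.EncodeToString(h.Sum(nil))
		if got != expected {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, expected)
		}
		return nil
	}

	func main() {
		// File name and digest taken from the download URL above.
		err := verifyMD5("preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4",
			"3bd37d965c85173ac77cdcc664938efd")
		fmt.Println(err)
	}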

TestDownloadOnly/v1.30.2/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.30.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.2/DeleteAll (0.23s)

TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-105000
--- PASS: TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.22s)

TestBinaryMirror (0.34s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-588000 --alsologtostderr --binary-mirror http://127.0.0.1:50905 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-588000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-588000
--- PASS: TestBinaryMirror (0.34s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-495000
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-495000: exit status 85 (62.724208ms)

-- stdout --
	* Profile "addons-495000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-495000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-495000
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-495000: exit status 85 (58.77275ms)

-- stdout --
	* Profile "addons-495000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-495000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestHyperKitDriverInstallOrUpdate (10.3s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.30s)

TestErrorSpam/start (0.38s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.09s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 status: exit status 7 (32.360375ms)

-- stdout --
	nospam-659000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 status: exit status 7 (29.735334ms)

-- stdout --
	nospam-659000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 status: exit status 7 (28.8885ms)

-- stdout --
	nospam-659000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.09s)
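Each status call above exits with code 7 because the profile host is stopped; the test keys off that exit code rather than the printed text. A minimal sketch of reading the code from a caller's side with Go's os/exec (the binary path and profile name are taken from this run; everything else is illustrative):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation the test makes; in this run, exit status 7
		// signals a stopped host rather than a hard failure.
		cmd := exec.Command("out/minikube-darwin-arm64", "-p", "nospam-659000", "status")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Println("exit code:", exitErr.ExitCode())
		}
	}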

TestErrorSpam/pause (0.12s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 pause: exit status 83 (39.052333ms)

-- stdout --
	* The control-plane node nospam-659000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-659000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 pause: exit status 83 (40.021125ms)

-- stdout --
	* The control-plane node nospam-659000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-659000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 pause: exit status 83 (40.519167ms)

-- stdout --
	* The control-plane node nospam-659000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-659000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)

TestErrorSpam/unpause (0.12s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 unpause: exit status 83 (40.022458ms)

-- stdout --
	* The control-plane node nospam-659000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-659000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 unpause: exit status 83 (39.870542ms)

-- stdout --
	* The control-plane node nospam-659000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-659000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 unpause: exit status 83 (39.8ms)

-- stdout --
	* The control-plane node nospam-659000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-659000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)

TestErrorSpam/stop (7.1s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 stop: (1.952179584s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 stop: (2.078162042s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-659000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-659000 stop: (3.06344275s)
--- PASS: TestErrorSpam/stop (7.10s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19124-4612/.minikube/files/etc/test/nested/copy/5136/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-880000 cache add registry.k8s.io/pause:3.1: (1.082986042s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-880000 cache add registry.k8s.io/pause:3.3: (1.080733s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.04s)

TestFunctional/serial/CacheCmd/cache/add_local (1.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-880000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local236311564/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 cache add minikube-local-cache-test:functional-880000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 cache delete minikube-local-cache-test:functional-880000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-880000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.22s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.22s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 config get cpus: exit status 14 (29.844333ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 config get cpus: exit status 14 (34.198625ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)

TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-880000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-880000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (162.195625ms)

-- stdout --
	* [functional-880000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0624 03:20:50.349406    5780 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:20:50.349626    5780 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:20:50.349631    5780 out.go:304] Setting ErrFile to fd 2...
	I0624 03:20:50.349634    5780 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:20:50.349829    5780 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:20:50.351231    5780 out.go:298] Setting JSON to false
	I0624 03:20:50.371628    5780 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4820,"bootTime":1719219630,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:20:50.371688    5780 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:20:50.376213    5780 out.go:177] * [functional-880000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0624 03:20:50.384240    5780 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:20:50.384285    5780 notify.go:220] Checking for updates...
	I0624 03:20:50.391171    5780 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:20:50.392426    5780 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:20:50.395160    5780 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:20:50.398179    5780 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:20:50.401151    5780 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:20:50.404593    5780 config.go:182] Loaded profile config "functional-880000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:20:50.404904    5780 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:20:50.409141    5780 out.go:177] * Using the qemu2 driver based on existing profile
	I0624 03:20:50.416152    5780 start.go:297] selected driver: qemu2
	I0624 03:20:50.416158    5780 start.go:901] validating driver "qemu2" against &{Name:functional-880000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-880000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:20:50.416210    5780 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:20:50.423165    5780 out.go:177] 
	W0624 03:20:50.427130    5780 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0624 03:20:50.431201    5780 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-880000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.28s)
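The non-zero exit is the point of this test: 250MiB is below the 1800MB usable minimum quoted in the RSRC_INSUFFICIENT_REQ_MEMORY lines, so validation aborts before any VM work starts. A sketch of the shape of that guard (illustrative only; minikube's real check lives in its start path and also handles unit parsing and driver-specific limits):

	package main

	import "fmt"

	const minUsableMemoryMB = 1800 // the minimum quoted in the error above

	// validateRequestedMemory mimics the guard that rejected --memory 250MB.
	func validateRequestedMemory(requestedMB int) error {
		if requestedMB < minUsableMemoryMB {
			return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMemoryMB)
		}
		return nil
	}

	func main() {
		fmt.Println(validateRequestedMemory(250))  // fails, as in the test
		fmt.Println(validateRequestedMemory(4000)) // passes, the suggested default
	}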

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-880000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-880000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (107.75175ms)

-- stdout --
	* [functional-880000] minikube v1.33.1 sur Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0624 03:20:50.583463    5791 out.go:291] Setting OutFile to fd 1 ...
	I0624 03:20:50.583577    5791 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:20:50.583580    5791 out.go:304] Setting ErrFile to fd 2...
	I0624 03:20:50.583582    5791 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:20:50.583729    5791 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19124-4612/.minikube/bin
	I0624 03:20:50.585174    5791 out.go:298] Setting JSON to false
	I0624 03:20:50.601658    5791 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4820,"bootTime":1719219630,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0624 03:20:50.601744    5791 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:20:50.605649    5791 out.go:177] * [functional-880000] minikube v1.33.1 sur Darwin 14.4.1 (arm64)
	I0624 03:20:50.613320    5791 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:20:50.613386    5791 notify.go:220] Checking for updates...
	I0624 03:20:50.620134    5791 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	I0624 03:20:50.623197    5791 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0624 03:20:50.626071    5791 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:20:50.629193    5791 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	I0624 03:20:50.632190    5791 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:20:50.633772    5791 config.go:182] Loaded profile config "functional-880000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:20:50.634042    5791 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:20:50.638102    5791 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0624 03:20:50.645051    5791 start.go:297] selected driver: qemu2
	I0624 03:20:50.645061    5791 start.go:901] validating driver "qemu2" against &{Name:functional-880000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-880000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:20:50.645122    5791 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:20:50.651193    5791 out.go:177] 
	W0624 03:20:50.655205    5791 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0624 03:20:50.659088    5791 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/License (0.21s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.21s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.98s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.941887542s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-880000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.98s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-880000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 image rm gcr.io/google-containers/addon-resizer:functional-880000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-880000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 image save --daemon gcr.io/google-containers/addon-resizer:functional-880000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-880000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

TestFunctional/parallel/ProfileCmd/profile_list (0.1s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "68.814667ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "31.823ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.10s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.1s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "69.099959ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "32.50975ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.10s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.013274334s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
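The dscacheutil query demonstrates that, while the tunnel is up, the host's resolver can see the cluster service name. A comparable check from Go, as a sketch only (the test deliberately shells out to dscacheutil so that macOS's own resolver path is exercised, which Go's net.LookupHost may or may not follow):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// The service name exposed by `minikube tunnel` in this run.
		addrs, err := net.LookupHost("nginx-svc.default.svc.cluster.local.")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("resolved to:", addrs)
	}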

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-880000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_addon-resizer_images (0.17s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-880000
--- PASS: TestFunctional/delete_addon-resizer_images (0.17s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-880000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-880000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (2.04s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-672000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-672000 --output=json --user=testUser: (2.040051166s)
--- PASS: TestJSONOutput/stop/Command (2.04s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.33s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-587000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-587000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (92.85875ms)

-- stdout --
	{"specversion":"1.0","id":"dea76998-5f4d-4fd1-ac82-918f1351f935","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-587000] minikube v1.33.1 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7e579890-30d8-4dc8-a877-aa1e9e3f0fe2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19124"}}
	{"specversion":"1.0","id":"410fd06a-cc97-4dec-ad7f-6b49c0e4abde","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig"}}
	{"specversion":"1.0","id":"00c48e1d-c9dc-4cbd-9bbd-5056ec6f4ff7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"745f6c1f-9f89-4eb3-8ced-4e5645cfc09c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"62f3fd07-293a-46a0-a3ff-15d7e6409e1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube"}}
	{"specversion":"1.0","id":"77e09db3-e317-49c9-9a18-39102f2ec5c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fe88d5bd-8363-4aad-bead-1fb3792b4768","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-587000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-587000
--- PASS: TestErrorJSONOutput (0.33s)
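
For reference: each stdout line above is a CloudEvents-style JSON object, one event per line. A minimal Go sketch that decodes such lines follows; the struct mirrors only the keys visible in this log (specversion, id, source, type, data), and treating "data" as a string map is an assumption that holds for the events shown here, not a claim about minikube's full schema.

    // Decode CloudEvents-style lines as printed by "minikube start --output=json".
    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    type cloudEvent struct {
        SpecVersion string            `json:"specversion"`
        ID          string            `json:"id"`
        Source      string            `json:"source"`
        Type        string            `json:"type"`
        Data        map[string]string `json:"data"`
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var ev cloudEvent
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // tolerate non-JSON lines mixed into the stream
            }
            // Error events (type io.k8s.sigs.minikube.error) carry the exit
            // code and reason, e.g. DRV_UNSUPPORTED_OS with exitcode 56 above.
            if ev.Type == "io.k8s.sigs.minikube.error" {
                fmt.Printf("error %s (exit %s): %s\n",
                    ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
                continue
            }
            fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
        }
    }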

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.15s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-996000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-996000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (148.525042ms)

-- stdout --
	* [NoKubernetes-996000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19124
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19124-4612/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19124-4612/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.15s)
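
The exit status 14 above is minikube's usage-error path: --kubernetes-version and --no-kubernetes are rejected together. A hypothetical Go sketch of this kind of mutually-exclusive-flag check follows; the flag names and message echo the log, but the validation code and the exit-code constant are illustrative, not minikube's actual implementation.

    // Illustrative mutually-exclusive-flag check (not minikube's real code).
    package main

    import (
        "flag"
        "fmt"
        "os"
    )

    const exitUsage = 14 // matches the "exit status 14" observed above

    func main() {
        noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
        kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
        flag.Parse()

        if *noKubernetes && *kubernetesVersion != "" {
            fmt.Fprintln(os.Stderr,
                "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
            os.Exit(exitUsage)
        }
        fmt.Println("flags OK")
    }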

TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-996000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-996000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (47.107291ms)

-- stdout --
	* The control-plane node NoKubernetes-996000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-996000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)
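
The check above shells into the node and branches purely on exit status: 0 from systemctl would mean kubelet is active, any other systemctl status means inactive, and 83 here means the host is not running at all. A minimal sketch of the same probe, assuming the binary path and profile name taken from the log:

    // Probe kubelet state through "minikube ssh" and inspect the exit code.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64", "ssh", "-p", "NoKubernetes-996000",
            "sudo systemctl is-active --quiet service kubelet")
        err := cmd.Run()
        if err == nil {
            fmt.Println("kubelet is active") // would fail this test
            return
        }
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            // 83 = profile host not running; other codes = kubelet inactive.
            fmt.Printf("kubelet not active (exit %d)\n", ee.ExitCode())
            return
        }
        fmt.Println("could not run minikube:", err)
    }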

TestNoKubernetes/serial/ProfileList (0.14s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.14s)

TestNoKubernetes/serial/Stop (3.57s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-996000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-996000: (3.574322667s)
--- PASS: TestNoKubernetes/serial/Stop (3.57s)

TestStoppedBinaryUpgrade/Setup (1.01s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.01s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-996000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-996000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (49.699459ms)

-- stdout --
	* The control-plane node NoKubernetes-996000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-996000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.75s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-252000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.75s)

TestStartStop/group/old-k8s-version/serial/Stop (3.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-703000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-703000 --alsologtostderr -v=3: (3.077974125s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.08s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-703000 -n old-k8s-version-703000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-703000 -n old-k8s-version-703000: exit status 7 (69.311166ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-703000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
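
Note the "(may be ok)" handling: "minikube status --format={{.Host}}" exits non-zero (7 here) for a stopped cluster while still printing a usable host state. A small sketch of a helper that tolerates this, assuming the binary path and template from the log; the hostState helper name is hypothetical:

    // Read the host state, treating a non-zero exit with output as non-fatal.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "strings"
    )

    func hostState(profile string) (string, error) {
        out, err := exec.Command("out/minikube-darwin-arm64", "status",
            "--format={{.Host}}", "-p", profile).Output()
        state := strings.TrimSpace(string(out))
        var ee *exec.ExitError
        if errors.As(err, &ee) && state != "" {
            // Exit status 7 still reports a state such as "Stopped" (may be ok).
            return state, nil
        }
        return state, err
    }

    func main() {
        state, err := hostState("old-k8s-version-703000")
        if err != nil {
            fmt.Println("status failed:", err)
            return
        }
        fmt.Println("host:", state)
    }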

TestStartStop/group/no-preload/serial/Stop (3.59s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-030000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-030000 --alsologtostderr -v=3: (3.587477041s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.59s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-030000 -n no-preload-030000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-030000 -n no-preload-030000: exit status 7 (63.354875ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-030000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (1.92s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-589000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-589000 --alsologtostderr -v=3: (1.920633875s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (1.92s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-589000 -n embed-certs-589000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-589000 -n embed-certs-589000: exit status 7 (62.831291ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-589000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (2.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-353000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-353000 --alsologtostderr -v=3: (2.100705958s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (2.10s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000: exit status 7 (65.60325ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-353000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.05s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-744000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.05s)

TestStartStop/group/newest-cni/serial/Stop (2.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-744000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-744000 --alsologtostderr -v=3: (2.109632041s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.11s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-744000 -n newest-cni-744000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-744000 -n newest-cni-744000: exit status 7 (66.300417ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-744000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)


Test skip (22/258)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.2/cached-images (0.00s)

TestDownloadOnly/v1.30.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.2/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (10.91s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-880000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2912672818/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1719224413660584000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2912672818/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1719224413660584000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2912672818/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1719224413660584000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2912672818/001/test-1719224413660584000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (57.367375ms)

-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.034583ms)

-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.049833ms)

-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.925208ms)

-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.907208ms)

-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.963334ms)

-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.4975ms)

-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh "sudo umount -f /mount-9p": exit status 83 (47.531875ms)

-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-880000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-880000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2912672818/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (10.91s)
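
The skip above comes after seven failed probes for the 9p mount. A minimal sketch of that poll loop, assuming the minikube invocation from the log; the retry delay is illustrative, since the test's actual timing is not shown here:

    // Poll for the 9p mount to become visible inside the node.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func mountVisible(profile string) bool {
        cmd := exec.Command("out/minikube-darwin-arm64", "-p", profile,
            "ssh", "findmnt -T /mount-9p | grep 9p")
        return cmd.Run() == nil // exit 0 only when grep finds a 9p entry
    }

    func main() {
        for i := 0; i < 7; i++ { // the log shows seven attempts before skipping
            if mountVisible("functional-880000") {
                fmt.Println("mount appeared")
                return
            }
            time.Sleep(time.Second)
        }
        fmt.Println("skipping: mount did not appear")
    }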

TestFunctional/parallel/MountCmd/specific-port (12.59s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-880000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port759897281/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (60.066916ms)

-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.451959ms)

-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.086791ms)

-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.22ms)

-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.155416ms)

-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.871458ms)

-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.103542ms)

-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.803666ms)

-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh "sudo umount -f /mount-9p": exit status 83 (44.712625ms)

-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-880000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-880000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port759897281/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (12.59s)

TestFunctional/parallel/MountCmd/VerifyCleanup (13.12s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-880000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3595197290/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-880000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3595197290/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-880000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3595197290/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T" /mount1: exit status 83 (80.465541ms)

-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T" /mount1: exit status 83 (84.990334ms)

-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T" /mount1: exit status 83 (88.156875ms)

-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T" /mount1: exit status 83 (85.72425ms)

-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T" /mount1: exit status 83 (87.096792ms)

-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T" /mount1: exit status 83 (87.720541ms)

-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-880000 ssh "findmnt -T" /mount1: exit status 83 (87.407875ms)

-- stdout --
	* The control-plane node functional-880000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-880000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-880000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3595197290/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-880000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3595197290/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-880000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3595197290/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (13.12s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.22s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-027000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-027000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.22s)

TestNetworkPlugins/group/cilium (2.38s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-871000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-871000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-871000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-871000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-871000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-871000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-871000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-871000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-871000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-871000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-871000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-871000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871000"

>>> host: /etc/hosts:
* Profile "cilium-871000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871000"

>>> host: /etc/resolv.conf:
* Profile "cilium-871000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-871000

>>> host: crictl pods:
* Profile "cilium-871000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871000"

>>> host: crictl containers:
* Profile "cilium-871000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871000"

>>> k8s: describe netcat deployment:
error: context "cilium-871000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-871000" does not exist

>>> k8s: netcat logs:
error: context "cilium-871000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-871000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-871000" does not exist

>>> k8s: coredns logs:
error: context "cilium-871000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-871000" does not exist

>>> k8s: api server logs:
error: context "cilium-871000" does not exist

>>> host: /etc/cni:
* Profile "cilium-871000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871000"

>>> host: ip a s:
* Profile "cilium-871000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871000"

>>> host: ip r s:
* Profile "cilium-871000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871000"

>>> host: iptables-save:
* Profile "cilium-871000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871000"

>>> host: iptables table nat:
* Profile "cilium-871000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-871000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-871000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-871000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-871000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-871000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-871000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-871000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-871000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-871000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-871000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-871000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-871000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871000"

>>> host: kubelet daemon config:
* Profile "cilium-871000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871000"

>>> k8s: kubelet logs:
* Profile "cilium-871000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-871000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-871000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-871000

>>> host: docker daemon status:
* Profile "cilium-871000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871000"

>>> host: docker daemon config:
* Profile "cilium-871000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-871000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871000"

>>> host: docker system info:
* Profile "cilium-871000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871000"

>>> host: cri-docker daemon status:
* Profile "cilium-871000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871000"

>>> host: cri-docker daemon config:
* Profile "cilium-871000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-871000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-871000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871000"

>>> host: cri-dockerd version:
* Profile "cilium-871000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871000"

>>> host: containerd daemon status:
* Profile "cilium-871000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871000"

>>> host: containerd daemon config:
* Profile "cilium-871000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-871000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-871000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871000"

>>> host: containerd config dump:
* Profile "cilium-871000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871000"

>>> host: crio daemon status:
* Profile "cilium-871000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871000"

>>> host: crio daemon config:
* Profile "cilium-871000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871000"

>>> host: /etc/crio:
* Profile "cilium-871000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871000"

>>> host: crio config:
* Profile "cilium-871000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-871000"

----------------------- debugLogs end: cilium-871000 [took: 2.153770125s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-871000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-871000
--- SKIP: TestNetworkPlugins/group/cilium (2.38s)
